BAYESIAN ESTIMATION OF PROBABILITIES OF TARGET RATES USING FED FUNDS FUTURES OPTIONS


MARK FISHER

Preliminary and Incomplete

Abstract. This paper adopts a Bayesian approach to the problem of extracting the probabilities of the FOMC's target Fed Funds rate from option prices, extending the framework provided by Carlson et al. (2005) in a number of directions. Likelihood-related novelties include (i) the allowance for slippage between the target rate and the month-average rate and (ii) accounting for the nonnegativity of options prices by truncating the measurement error. In addition, we generalize the likelihood for target rates to include the likelihood for target-rate paths. Prior-related novelties include (i) enforcing nonnegativity constraints on the probabilities and (ii) the informed ignorance prior that allows for the inclusion of a large number of possible target rates (which is especially important for joint/path estimation).

1. Introduction

As Carlson et al. (2005) point out, the problem of extracting the probabilities of the FOMC's target Fed Funds rate from option prices is greatly simplified if one assumes the target rate can take only a small number of values. They use Classical least-squares regression techniques to extract the target-rate probabilities. The paper in hand adopts a Bayesian approach to inference instead, and extends the framework provided by Carlson et al. (2005) in a number of directions.[1] The main advantage of the Bayesian approach is that one gets to specify a prior distribution. The prior allows us to deal with important features of the target-rate probabilities in a sensible way. First, the target-rate probabilities should satisfy an equality constraint (they should sum to one) and a number of inequality constraints (they should each be nonnegative).
Although both Classical and Bayesian approaches handle the equality constraint easily, the Classical approach has difficulty handling the inequality constraints while the Bayesian approach handles them quite naturally via the prior.[2] Second, the Bayesian approach makes it easy to allow for a large number of target rates (or target-rate paths) in the model. There is a simple prior for the unobserved probabilities that encapsulates what might be called partially informed ignorance. In effect, this

Date: March 21, 11:09 a.m. I thank Jerry Dwyer, Mark Jensen, and Dan Waggoner for helpful discussions. The opinions expressed herein are those of the author and do not represent those of the Federal Reserve Bank of Atlanta or the Federal Reserve System.
[1] The reader is referred to Carlson et al. (2005) for omitted institutional details.
[2] For an early example of the Bayesian approach to handling inequality constraints, see Geweke (1986).

prior asserts that only a few of the target-rate probabilities are nonnegligible, without specifying which those are. Formally, this prior is a symmetric Dirichlet distribution with a concentration parameter that encourages parsimony. In addition, the concentration parameter itself is assumed to be unknown and therefore has its own prior distribution. (A closely-related prior is used in mixture models to allow for a potentially large number of mixture components while at the same time encouraging parsimony. One should keep in mind, however, that the current exercise is one of functional-form fitting, rather than density estimation per se.) This prior allows one to include a large number of target rates (greater even than the number of observations) and still obtain a stable and well-estimated posterior distribution. This feature has the potential to allow robust estimation of paths of interest-rate targets, where the number of paths (and the number of associated joint probabilities) is quite large. Third, the Bayesian approach provides a coherent treatment of the possibility of unscheduled FOMC meetings. The main disadvantage of the Bayesian approach is that the computational burden is significantly higher than for the Classical approach. (Regarding this point, however, one should recognize there are no free lunches, even in statistics.)

Novelties. Likelihood-related novelties include (i) the allowance for slippage between the target rate and the month-average rate and (ii) accounting for the nonnegativity of options prices by truncating the measurement error. In addition, we generalize the likelihood for target rates to include the likelihood for target-rate paths. Prior-related novelties include (i) enforcing nonnegativity constraints on the probabilities and (ii) the informed ignorance prior that allows for the inclusion of a large number of possible target rates (which is especially important for joint estimation).

Caveats. The options are American, not European.
The risk-neutral measures for different horizons (different months) are not mutually compatible, and consequently the proposed joint estimation is not completely coherent. Similarly, futures prices are used in place of forward prices. Liquidity is not taken into account in weighing the quotes. (This could enter via the uncertainty of the measurement error.)

Outline of paper. Section 2 presents an overview of the model, including the likelihood, the prior, and the posterior sampler. Section 3 introduces month-average target rates, deals with the joint distribution of multiple target rates (target-rate paths), and outlines how to extend the model to include unscheduled meetings. Section 4 describes some issues related to putting the available data into a form suitable for estimation. Section 5 discusses stochastic-process dynamics of the target-rate probabilities. Appendix A includes some additional material. The empirical section of the paper is currently absent. Although a few proof-of-concept tests have been run, a systematic treatment of a substantial data set is only just beginning.

2. The big picture

The goal is to estimate the risk-neutral probability distribution for the target fed funds rate from put and call options data. Let R denote the target rate as set by the Federal Open Market Committee (FOMC).[3] We assume the target rate is restricted to a finite set of known values R ∈ r = {r_1, ..., r_K}. Let the risk-neutral probabilities for the values be given by

    p(R = r_j | β) = β_j,    (2.1)

where β = (β_1, ..., β_K), β_j ≥ 0, and ∑_{j=1}^K β_j = 1. We can express the (risk-neutral) density for R as a mixture of point masses:

    p(R | β) := ∑_{j=1}^K β_j δ(R − r_j),    (2.2)

where δ(·) is the Dirac delta function.[4] We assume β is unknown. It is the focus of our investigation. We model the month-average fed funds rate S as the sum of the target rate R and the slippage between S and R:

    S = R + u.    (2.3)

Given (2.2) and (2.3), we can express the risk-neutral density for S as a related mixture of point masses:

    p(S | β, u) := ∑_{j=1}^K β_j δ(S − r_j − u).    (2.4)

We assume u is unknown as well. Consider a European option written on the month-average rate S with a strike price of K. The payoff to the option at expiration is

    φ(K, γ, S) := (γ (S − K))⁺,    (2.5)

where x⁺ := max(x, 0) and

    γ = +1 for a call, −1 for a put.    (2.6)

Let B denote the value of a risk-free zero-coupon bond that matures on the expiration date. The value V of the option is the present value of the expected payout computed using the

[3] See Section 3 for institutional details.
[4] The main features of δ(·) are (i) δ(x − x₀) = 0 if x ≠ x₀, (ii) ∫ δ(x − x₀) dx = 1, and (iii) ∫ f(x) δ(x − x₀) dx = f(x₀).

risk-neutral probabilities:

    V = B ∫ φ(K, γ, S) p(S | β, u) dS
      = B ∑_{j=1}^K β_j ∫ φ(K, γ, S) δ(S − r_j − u) dS
      = B ∑_{j=1}^K β_j φ(K, γ, r_j + u).    (2.7)

Equation (2.7) suggests the task ahead: Given some options data, fit a functional form using the basis functions φ(·, ·, ·). Assume the data are composed of n observations, {K_i, γ_i, V_i}_{i=1}^n, and suppose they are generated according to

    y_i = ∑_{j=1}^K β_j φ(K_i, γ_i, r_j + u) + ε_i,    (2.8)

where y_i := B⁻¹ V_i is the deflated option value and ε_i is a measurement error. Let X denote the n × K matrix where

    X_{ij} = φ(K_i, γ_i, r_j + u) = (γ_i (r_j + u − K_i))⁺.    (2.9)

(Note that X depends on the unknown u. We will write X(u) when it is convenient to make this dependence explicit.) We can express (2.8) as

    y_i = X_i β + ε_i,    (2.10)

where X_i denotes the i-th row of X. Stacking the n observations, we have

    y = X β + ε,    (2.11)

where y = (y_1, ..., y_n) and ε = (ε_1, ..., ε_n). It is convenient to suppose ε ~ N(0_n, σ² I_n).[5] We assume σ² is unknown. Let θ = (β, u, σ²) denote the vector of unknown parameters. The likelihood for θ follows from (2.11) and the distribution for the measurement error:

    p(y | θ) = N(y | X(u) β, σ² I_n) = (2π σ²)^{−n/2} exp( −S(β, u) / (2σ²) ),    (2.12)

where[6]

    S(β, u) := (y − X(u) β)′ (y − X(u) β).    (2.13)

Remark. The novelty in (2.12) relative to Carlson et al. (2005) is to allow for u ≠ 0.

[5] One drawback of this assumption is that the sampling distribution for the data p(y | θ) [see (2.12)] implies we may observe negative option prices. This can be remedied by truncating the distribution for y. We incorporate this below.
[6] A′ denotes the transpose of A.
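To fix ideas, the basis functions (2.9) and the fitted values X(u)β in (2.10) can be sketched in a few lines. The function names and the numerical example below are ours (illustrative, not from the paper):

```python
def phi(strike, gamma, rate):
    # Payoff basis function (2.9): (gamma * (rate - strike))^+,
    # with gamma = +1 for a call and -1 for a put.
    return max(gamma * (rate - strike), 0.0)

def design_matrix(strikes, gammas, rates, u):
    # n x K matrix X(u) with X[i][j] = phi(K_i, gamma_i, r_j + u).
    return [[phi(k, g, r + u) for r in rates]
            for k, g in zip(strikes, gammas)]

def fitted_values(X, beta):
    # Deflated option values X beta implied by the probabilities beta (2.10).
    return [sum(x * b for x, b in zip(row, beta)) for row in X]

# Hypothetical example: three possible target rates, one call quote.
rates = [0.0200, 0.0225, 0.0250]
X = design_matrix([0.0225], [+1], rates, 0.0)
v = fitted_values(X, [0.25, 0.50, 0.25])[0]
```

With zero slippage, only the highest rate is in the money at this strike, so the deflated value is the probability-weighted payoff 0.25 × 0.0025.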

Predictive distributions. At this point it is convenient to make a few remarks about predictive distributions, both prior and posterior, for R and S. Let p(β) and p(β | y) denote the marginal prior and posterior distributions for β, and let E[β_j] = ∫ β_j p(β) dβ and E[β_j | y] = ∫ β_j p(β | y) dβ denote the prior and posterior expectations of β_j. The prior and posterior predictive distributions for R are both mixtures of point masses where the mixture weights are the applicable expectations of β:

    p(R) = ∫ p(R | β) p(β) dβ = ∑_{j=1}^K E[β_j] δ(R − r_j)    (2.14)
    p(R | y) = ∫ p(R | β) p(β | y) dβ = ∑_{j=1}^K E[β_j | y] δ(R − r_j).    (2.15)

Note that E[β | y] is the only feature of the posterior distribution that matters if the goal of the inferential exercise is to compute the posterior predictive distribution for R.[7] Also note that as we approach the end of the month, the posterior distribution for β should approach a point mass at β = I_j for some j, where I_j is the j-th row of the identity matrix, so that E[β | y] → I_j.

It is instructive to examine the predictive distributions for S as well. The prior predictive distribution for S inherits the form of the prior distribution for the slippage parameter u. We assume prior independence between β and u: p(β, u) = p(β) p(u). The prior predictive distribution for S has the form of a mixture:

    p(S) = ∫∫ p(S | β, u) p(β) p(u) du dβ
         = ∫ ( ∑_{j=1}^K β_j ∫ δ(S − r_j − u) p(u) du ) p(β) dβ
         = ∑_{j=1}^K E[β_j] p(S | j),    (2.16)

where

    p(S | j) := p(u) |_{u = S − r_j}.    (2.17)

Therefore, if p(u) is continuous, then p(S) will be a mixture of continuous distributions with different locations.

[7] In a decision-making setting where one can either act now or wait for new information before acting, the posterior uncertainty regarding β may play a role.
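Since E[β | y] is all that matters for the posterior predictive distribution of R, the mixture weights in (2.15) follow directly from posterior draws of β. A minimal sketch (the function name and toy numbers are ours):

```python
def predictive_weights(beta_draws):
    # Mixture weights E[beta_j | y] in (2.15), estimated as the
    # componentwise average of posterior draws of beta.
    n_draws = len(beta_draws)
    K = len(beta_draws[0])
    return [sum(draw[j] for draw in beta_draws) / n_draws for j in range(K)]

# Toy illustration: two hypothetical posterior draws over K = 2 rates.
weights = predictive_weights([[0.2, 0.8], [0.4, 0.6]])   # approx. [0.3, 0.7]
```

The weights again sum to one, so the predictive distribution for R is itself a proper mixture of point masses.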

The posterior predictive distribution for S is more complicated owing to potential dependence between β and u in the posterior distribution:

    p(S | y) = ∫∫ p(S | β, u) p(β, u | y) du dβ
             = ∫ ( ∑_{j=1}^K β_j ∫ δ(S − r_j − u) p(u | β, y) du ) p(β | y) dβ
             = ∑_{j=1}^K ∫ β_j p(S | j, β, y) p(β | y) dβ,    (2.18)

where

    p(S | j, β, y) = p(u | β, y) |_{u = S − r_j}.    (2.19)

If p(S | j, β, y) ≈ p(S | j, y), then

    p(S | y) ≈ ∑_{j=1}^K E[β_j | y] p(S | j, y).    (2.20)

As we approach the end of the month, the posterior predictive distribution should collapse to a point mass on the actual month-average funds rate.

The prior. It remains to adopt a prior distribution for the unknown parameters θ = (β, u, σ²). We assume prior independence: p(θ) = p(β) p(u) p(σ²). For σ² we adopt the Jeffreys prior: p(σ²) ∝ 1/σ². Here we provide some introductory remarks regarding the prior for u. Typically we will adopt a symmetric prior for u, centered on zero. There may be times, however, when we will center it elsewhere (the beginning of the financial crisis, for example, when the month-average rate dropped significantly with no change in the stated target rate). The form of the prior for u should allow the scale to be chosen endogenously. One could, for example, adopt a Gaussian prior for u, perhaps a mixture to allow for differing regimes.

Now we focus on the prior for β. A benefit of the Bayesian approach is the ability to incorporate important considerations via the prior for β. In particular, the prior for β should embody two features. First, the prior should ensure the constraint β ∈ Δ^{K−1}, where Δ^{K−1} is the (K−1)-dimensional simplex. Second, the prior should be capable of expressing the idea that while many target rates are possible, only a few should have nontrivial probability. The Dirichlet distribution embodies both of these features. Let

    p(β | α, ξ) = ( Γ(α) / ∏_{j=1}^K Γ(α ξ_j) ) ∏_{j=1}^K β_j^{α ξ_j − 1},    (2.21)

where α > 0 and ξ ∈ Δ^{K−1}. Note E[β | α, ξ] = ξ. We will refer to α as the concentration parameter. If ξ_j = 1/K and α = K, then the prior is flat: p(β | α, ξ) = (K − 1)!.
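The role of the concentration parameter in the symmetric Dirichlet prior (2.21) is easy to see by simulation. The following sketch is ours (it draws from the Dirichlet by normalizing Gamma variates and counts components carrying nonnegligible mass):

```python
import random

def dirichlet_symmetric(alpha, K, rng):
    # beta ~ Dirichlet(alpha * xi) with xi_j = 1/K: normalize K
    # independent Gamma(alpha / K, 1) draws.
    g = [rng.gammavariate(alpha / K, 1.0) for _ in range(K)]
    total = sum(g)
    return [x / total for x in g]

def avg_nonnegligible(alpha, K, rng, ndraws=1000, cutoff=0.05):
    # Average number of components with prior mass above `cutoff`.
    hits = 0
    for _ in range(ndraws):
        hits += sum(b > cutoff for b in dirichlet_symmetric(alpha, K, rng))
    return hits / ndraws
```

With K = 20, setting alpha = K (the flat prior) typically spreads mass across many components, while alpha = 1 < K pushes draws toward the vertices of the simplex, so only a handful of components are nonnegligible in a typical draw.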
As K gets large relative to n, the flat prior for β tends to dominate the likelihood (2.12) and the posterior can become quite flat itself. Setting α < K encourages parsimony, pushing the mass of the prior toward the vertices of the simplex, thereby implicitly suggesting that

only a few of the components are nonnegligible. (This prior could be described as "partially informed ignorance.") On the other hand, if ξ is well-informed (because, for example, it is based on closely-related data) then α > K may be suitable. Below we discuss incorporating uncertainty about α via a prior.

The posterior sampler. The main cost of the Bayesian approach is drawing from the posterior distribution when an analytical solution is absent (as it is here). The posterior distribution for the parameters can be expressed as

    p(θ | y) ∝ p(y | θ) p(β | α, ξ) p(u) / σ²,    (2.22)

where θ = (β, u, σ²). We take a Gibbs-sampler approach, applying Metropolis–Hastings within where necessary. Drawing from the posterior for σ² (conditional on β and u) is straightforward. Note σ² | y, β, u ~ Inv-χ²(ν, s²), where ν = n and s² = S(β, u)/n.[8] The posterior distribution for the slippage parameter u conditional on (β, σ²) can be expressed as

    p(u | y, β, σ²) ∝ exp( −S(β, u) / (2σ²) ) p(u).    (2.23)

We can make draws from p(u | y, β, σ²) using a Metropolis scheme. We now turn to drawing from the posterior for β (conditional on σ² and u). First we show how to draw from the posterior assuming the prior for β is flat; then we show how to draw from the posterior with the more general prior.

A technical detail. Since ∑_{j=1}^K β_j = 1, the conditional distribution β_j | β_{−j} is degenerate: the value of β_j is fixed by β_{−j}. By removing one of the components, say β_k, we can then cycle through the remaining K − 1 components via a one-at-a-time Gibbs sampler. However, the (average) magnitude of β_k will affect the efficiency of the sampler. Let β_j | β_{−j}^{−k} denote conditioning on β_{−j} \ {β_k}. If β_k is close to zero, we will find ourselves back in the previous trap with little or no wiggle room for β_j | β_{−j}^{−k}. Therefore, for each sweep of the Gibbs sampler we will remove the largest component from the previous sweep and sample over the remaining components.

Back to the main thread.
If the prior for β were flat, then the posterior for β (conditional on σ²) would be a multivariate normal distribution truncated to the simplex. This truncated normal distribution can be hard to draw from when only a small fraction of the mass of the unrestricted distribution is contained in the simplex. Moreover, with K large relative to n, X′X will be singular and hence not invertible. Nevertheless, a one-at-a-time Gibbs sampler works well. The posterior distribution for β_i | β_{−i}^{−k} is a univariate truncated normal distribution:

    p(β_i | y, β_{−i}^{−k}, σ², u) = N_{[0, b_i^{−k}]}(β_i | m_i, s_i²),    (2.24)

where

    b_i^{−k} = β_i + β_k    (2.25)

[8] If the prior for σ² were different, we could use this as a proposal for a Metropolis step.

and (m_i, s_i²) can be readily calculated. In particular, note S(β, u) = c_{i0} + c_{i1} β_i + c_{i2} β_i² for some coefficients (c_{i0}, c_{i1}, c_{i2}) that are functions of (β_{−i}^{−k}, X(u), y). Consequently, m_i = −c_{i1}/(2 c_{i2}) and s_i² = σ²/c_{i2}. Draws from the univariate truncated normal can be made using the inverse-CDF method (for example).

We can accommodate the general Dirichlet prior for β by incorporating a Metropolis–Hastings step. We use the truncated normal distribution in (2.24) to draw the proposal β_i′. Because the proposal is a draw from the conditional posterior given a flat prior, the Hastings ratio equals the inverse of the likelihood ratio,

    q(β′, β) / q(β, β′) = p(y | β, u, σ²) / p(y | β′, u, σ²).    (2.26)

Consequently, the acceptance condition depends only on the prior ratio:

    p(β′ | α, ξ) / p(β | α, ξ) = (β_i′/β_i)^{α ξ_i − 1} (β_k′/β_k)^{α ξ_k − 1} ≥ u,    (2.27)

where u ~ U(0, 1) and where β_k′ = b_i^{−k} − β_i′. Note that if the prior for β were flat, then (of course) the proposal would always be accepted.

Uncertainty about α. The hyperparameter α plays an important role in the prior for β. Instead of specifying a fixed value for α, we can adopt a prior for α and incorporate an additional Metropolis step to sample α. The likelihood for α is given by (2.21). Let the prior for α be given by[9]

    p(α | ζ, τ) = α^{1/τ − 1} ζ^{1/τ} / ( τ (α^{1/τ} + ζ^{1/τ})² ),    (2.28)

where ζ, τ > 0. The mean of the distribution does not exist. We can see how to interpret the parameters via the quantile function, where α = Q(x) for x ∈ (0, 1) and

    Q(x) = ζ (x/(1 − x))^τ.    (2.29)

Thus ζ is the median and τ determines the spread. For our purposes, it is natural to choose ζ = K.[10] For making draws from the posterior, it is convenient to change variables: let z = log(α). Given (2.28), the prior for z is given by

    p(z | ζ, τ) = ζ^{1/τ} e^{z/τ} / ( τ (ζ^{1/τ} + e^{z/τ})² ),    (2.30)

[9] This is a special case of the Singh–Maddala distribution.
[10] The essence of this prior is captured by the following change of variables: let η := log(α/K) and let ζ = K; then p(η | τ) = e^{η/τ} / ( τ (1 + e^{η/τ})² ), which is symmetric around zero with a standard deviation of (π/√3) τ.

where p(z | ζ, τ) is symmetric around its mean of log(ζ) with a standard deviation of (π/√3) τ. The Metropolis step proceeds as follows: Given some average step-size s, make random-walk proposals z′ ~ N(z, s²) and accept z′ if

    ( p(β | α′, ξ) p(z′ | ζ, τ) ) / ( p(β | α, ξ) p(z | ζ, τ) ) ≥ u,    (2.31)

where u ~ U(0, 1) and where α = e^z and α′ = e^{z′}. A typical setting for the hyperparameters is ξ = (1/K, ..., 1/K), ζ = K, and τ = 2. However, in going from one observation day to the next, it may make sense to use ξ_t = E[β_{t−1} | y_{t−1}]. (Limited experience indicates that the posterior distribution for α will be quite different for these two priors.)

The nonnegativity of options prices. A problem with the likelihood (2.12) is that it may imply Pr[y_i < 0 | β, u, σ²] > 0. We believe one would never observe such out-of-bounds prices. (Such an observation would be discarded as a data mistake and not interpreted as the result of measurement error as we have construed it.) One way to fix this implication is to truncate the sampling distribution for the data at zero; this has the effect of imbuing the measurement error with an upward bias, where the bias gets larger as the option price gets closer to zero. The likelihood (for a single observation) that does not take into account the positivity of option prices can be expressed as p(y_i | θ) = N(y_i | µ_i, σ²), where µ_i := X_i(u) β. By comparison, consider the likelihood given a truncated Gaussian distribution:

    p̂(y_i | θ) := 1_{[0,∞)}(y_i) H(µ_i, σ²) N(y_i | µ_i, σ²),    (2.32)

where[11]

    H(µ, σ²) := { 1 − Φ(−µ/σ) }⁻¹.    (2.33)

Note ∫₀^∞ p̂(y_i | θ) dy_i = 1 and

    Ê[y_i | θ] = ∫₀^∞ y_i p̂(y_i | θ) dy_i = µ_i + σ² H(µ_i, σ²) N(0 | µ_i, σ²) > µ_i.    (2.34)

The inequality in (2.34) can be interpreted as an upward bias in the measurement error. This upward bias is largest at µ_i = 0 and declines monotonically to zero as µ_i → ∞.
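Both the conditional β-step (2.24) and the truncated sampling density (2.32) involve univariate truncated normals; the inverse-CDF draw mentioned above can be sketched as follows (our function, built on the standard library's normal distribution):

```python
from statistics import NormalDist

def truncnorm_draw(m, s, lo, hi, u):
    # Inverse-CDF draw from N(m, s^2) truncated to [lo, hi]:
    # map u ~ U(0,1) onto the CDF mass between lo and hi, then invert.
    nd = NormalDist(m, s)
    f_lo, f_hi = nd.cdf(lo), nd.cdf(hi)
    return nd.inv_cdf(f_lo + u * (f_hi - f_lo))
```

For the β-step, (m, s) would be (m_i, s_i) and [lo, hi] = [0, b_i^{−k}]; a single uniform draw u then yields one Gibbs update of β_i.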
Let us now consider the likelihood of the entire data set:

    p̂(y | θ) := ∏_{i=1}^n p̂(y_i | θ) = 1_{[0,∞)^n}(y) C(θ) p(y | θ),    (2.35)

where p(y | θ) = ∏_{i=1}^n p(y_i | θ) is given in (2.12) and

    C(θ) := ( ∫_{[0,∞)^n} p(y | θ) dy )⁻¹ = ∏_{i=1}^n H(µ_i, σ²).    (2.36)

Note, for fixed σ, H(µ, σ²) declines monotonically from H(0, σ²) = 2 to lim_{µ→∞} H(µ, σ²) = 1. Therefore 1 ≤ C(θ) ≤ 2ⁿ.

[11] Φ(·) is the standard normal cumulative distribution function (CDF).
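The truncation factor (2.33) and the correction C(θ) in (2.36) are cheap to evaluate from the fitted means µ_i. A sketch (the helper names are ours; Φ comes from the standard library):

```python
from statistics import NormalDist

_std = NormalDist()

def H(mu, sigma):
    # Truncation factor (2.33): 1 / (1 - Phi(-mu / sigma)).
    return 1.0 / (1.0 - _std.cdf(-mu / sigma))

def C(mus, sigma):
    # C(theta): product of H(mu_i, sigma^2) over the n observations (2.36);
    # bounded between 1 and 2^n.
    out = 1.0
    for mu in mus:
        out *= H(mu, sigma)
    return out
```

Averaging C over posterior draws of θ from the unrestricted model gives the Monte Carlo Bayes-factor estimate discussed next, and the normalized values serve as resampling weights.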

Let M_{[0,∞)^n} denote the model where y ∈ [0,∞)ⁿ. Assuming the data are all nonnegative, the likelihood of this model is

    p(y | M_{[0,∞)^n}) = ∫ C(θ) p(y | θ) p(θ) dθ.    (2.37)

Thus, the Bayes factor in favor of this model (relative to the unrestricted model) is

    BF = ( ∫ C(θ) p(y | θ) p(θ) dθ ) / ( ∫ p(y | θ) p(θ) dθ ) = E_{p(θ|y)}[C(θ)],    (2.38)

where the expectation is taken with respect to the posterior distribution of θ from the unrestricted model. Given draws {θ⁽ʳ⁾}_{r=1}^R from the posterior of the unrestricted model, it follows from the right-hand side of (2.38) that we can compute the Bayes factor as

    lim_{R→∞} R⁻¹ ∑_{r=1}^R C(θ⁽ʳ⁾) = BF.    (2.39)

In addition, the weights z⁽ʳ⁾ ∝ C(θ⁽ʳ⁾), where ∑_{r=1}^R z⁽ʳ⁾ = 1, can be used to resample the draws. If the weights do not vary very much, then the two models will predict similar distributions for the parameters even if the Bayes factor is large. The upshot is that we proceed by first estimating without the option-price nonnegativity restriction and then checking afterwards to see how the restriction changes the inferences. (Limited experience suggests the restriction does not change the inferences noticeably.)

3. Month-average target rates, target-rate paths, and unscheduled meetings

The target fed funds rate is set by the Federal Open Market Committee (FOMC) at its scheduled meetings. There are eight scheduled meetings per year, so there are four months each year without a (scheduled) meeting. (Occasionally it is changed at unscheduled meetings. We will deal with this possibility in the Appendix.) It is set in increments of 25 basis points (0.25 percent).[12] [Currently, however, the target rate is a band: 0 to 0.25%.] Consider a given meeting. We adopt the setup outlined in the previous section for the target rate R where there are K possibilities, r = (r_1, ..., r_K), and the risk-neutral probability is p(R = r_j | β) = β_j. We say that r is contiguous if r_j = r_1 + 0.25% × (j − 1) for j = 1, ..., K.[13] We will often assume the model is contiguous. Now consider a given month.
The fed funds futures contract depends on the monthly average of fed funds rates. (For example, a Friday rate counts three days in the average, for Friday, Saturday, and Sunday. If the funds market were closed the following Monday due to a holiday, then the preceding Friday rate would count four days, assuming all four days were in the same month.) Up to this point, we have implicitly assumed a single target applies for a given month. In general, however, the situation is more complicated. More often than not, there is a scheduled meeting within a given month. Prior to the meeting the previously established

[12] Cite some history.
[13] 0.25% = 0.0025.

target rate applies, while after the meeting a new (possibly different) target applies. Assuming the Open Market Desk keeps the daily funds rate at the appropriate target rate throughout the month, the month-average funds rate will be R̄ = (1 − ω) R₁ + ω R₂, where ω is the fraction of the days in the month for which R₂ is the target rate. If there were no meetings between the observation date and the meeting in the month in question, then R₁ can be identified with the current rate (assuming the absence of unscheduled meetings). More generally, both R₁ and R₂ will be unknown on the observation date, and consequently the joint distribution of the two target rates comes into play. In addition, the daily effective funds rate does not equal the target rate on a daily basis. More importantly for our purposes, the month-average funds rate S does not equal the month-average target rate R̄. We account for this difference by letting S = R̄ + u, where u is the month-average slippage.

Notation. Let τ = (τ_1, ..., τ_w) denote a sequence of meeting dates and let R_{τ_i} denote the target rate established at time τ_i. Let T_m⁰ denote the first day of month m. Let τ_m denote the date of the last meeting prior to the beginning of month m and let τ̄_m denote the date of the following meeting: τ_m < T_m⁰ ≤ τ̄_m. Let ω_m denote the fraction of month m from τ̄_m to the beginning of month m+1:

    ω_m = max{ 1 − (τ̄_m − T_m⁰)/(T_{m+1}⁰ − T_m⁰), 0 }.    (3.1)

In particular, if τ̄_m = T_m⁰ then ω_m = 1, and if τ̄_m ≥ T_{m+1}⁰ then ω_m = 0. The month-average target rate for month m is

    R̄_m = (1 − ω_m) R_{τ_m} + ω_m R_{τ̄_m}.    (3.2)

Let t denote the quote date (also known as the observation date). If t < τ_i then R_{τ_i} is unknown, while if t ≥ τ_i then R_{τ_i} is known. Note τ_{m+1} is the date of the last meeting prior to the beginning of month m+1. If month m has a meeting then τ_{m+1} occurs in month m; otherwise τ_{m+1} occurs in an earlier month. If t ≥ τ_{m+1}, then R̄_m is known. If t < τ_{m+1}, then R̄_m is unknown. If t < τ_m and there is a meeting during month m, then R̄_m involves two unknowns: R_{τ_m} and R_{τ̄_m}.
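To make (3.1) and (3.2) concrete, here is a small sketch (our code; the dates and rates are hypothetical):

```python
from datetime import date

def omega(meeting, month_start, next_month_start):
    # Fraction of the month governed by the post-meeting rate (3.1),
    # where `meeting` is the first meeting date on or after `month_start`.
    days_in_month = (next_month_start - month_start).days
    w = 1.0 - (meeting - month_start).days / days_in_month
    return max(w, 0.0)

def month_avg_target(r_before, r_after, w):
    # Month-average target rate (3.2): (1 - w) * r_before + w * r_after.
    return (1.0 - w) * r_before + w * r_after

# Hypothetical month: meeting on the 11th day of a 30-day month,
# with a 25-basis-point hike from 3.00% to 3.25%.
w = omega(date(2005, 6, 11), date(2005, 6, 1), date(2005, 7, 1))
r_bar = month_avg_target(0.0300, 0.0325, w)
```

A meeting on the first day of the month gives w = 1, and a meeting on or after the first day of the next month gives w = 0, matching the special cases noted in the text.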
A number of special cases. Now we consider a number of special cases of (3.2). In each case we show how the likelihood of an observation can be expressed as y_i = X_i β + ε_i, thereby allowing us to adopt both the likelihood and the prior described in Section 2.

Case 1. First consider a month that involves a single target rate. This occurs if either ω_m = 0 (a month with no meeting) or ω_m = 1 (a month with a meeting on the first day of the month). In either case, the month-average target rate involves a single target rate:

    R̄_m = R_{τ_m} if ω_m = 0;  R̄_m = R_{τ̄_m} if ω_m = 1.    (3.3)

Let us refer to this rate as R. Let R ∈ r = {r_1, ..., r_K}, p(R = r_j | β) = β_j, S = R + u, and X_{ij} = φ(K_i, γ_i, r_j + u). We have established y_i = X_i β + ε_i.

Case 2. Next suppose ω_m ∈ (0, 1) but that R_{τ_m} is known. (Thus the month contains a single unknown target rate.) Let us refer to R_{τ_m} and R_{τ̄_m} as R⁰ and R (respectively), so that R̄_m = (1 − ω_m) R⁰ + ω_m R. Let R ∈ r = {r_1, ..., r_K}, p(R = r_j | β) = β_j, S = R̄_m + u, and X_{ij} = φ(K_i, γ_i, r̃_j + u), where r̃_j = (1 − ω_m) R⁰ + ω_m r_j. Again, we have established y_i = X_i β + ε_i (in which r̃_j has taken the place of r_j). In passing, note that the smaller ω_m is, the less information there will be regarding R.

Case 3. Now consider (3.2) more generally. Let us refer to R_{τ_m} and R_{τ̄_m} as R¹ and R², respectively, so that R̄_m = (1 − ω_m) R¹ + ω_m R². Assume (R¹, R²) ∈ r¹ × r², where r¹ = (r¹_1, ..., r¹_{K₁}) and r² = (r²_1, ..., r²_{K₂}). Let b denote a K₁ × K₂ matrix of joint probabilities such that

    p(R¹ = r¹_k, R² = r²_l | b) = b_{kl},    (3.4)

where b_{kl} ≥ 0 and ∑_{k=1}^{K₁} ∑_{l=1}^{K₂} b_{kl} = 1. Let β denote a vectorized version of b, where (using row-major order)

    β_j = b_{kl} for j = (k − 1) K₂ + l.    (3.5)

Thus β ∈ Δ^{K−1}, where K = K₁ K₂. Assume S = R̄_m + u and let x_{ikl} = φ(K_i, γ_i, s_{kl} + u), where

    s_{kl} = (1 − ω_m) r¹_k + ω_m r²_l.    (3.6)

Then

    y_i = ∑_{k=1}^{K₁} ∑_{l=1}^{K₂} x_{ikl} b_{kl} + ε_i.    (3.7)

Let X_i denote the vectorized version of x_i. Then (3.7) can be expressed as y_i = X_i β + ε_i. Stacking the observations produces y = X β + ε. The matrix X has n rows/observations and K columns/paths, and the vector β has K elements/joint probabilities. There are a number of factors that militate against the ability to identify the probabilities in this case: the (potentially) large number of unknown probabilities relative to the number of observations, the (potentially) close spacings of the states s_{kl} (relative to the magnitude of the slippage and the measurement error), and the possibility that ω_m is near either zero or one. One possible way to overcome these problems is to use information from additional contract months. We now turn to that approach.

Two contract months at once. Suppose there is no meeting in month m (ω_m = 0) and there is a meeting in month m+1.
Let t < τ_m so that both R̄_m = R_{τ_m} and R̄_{m+1} = (1 − ω_{m+1}) R_{τ_{m+1}} + ω_{m+1} R_{τ̄_{m+1}} are unknown. The month-average rates share a common target rate: R_{τ_m} = R_{τ_{m+1}}. We refer to R_{τ_m} and R_{τ̄_{m+1}} as R¹ and R², respectively, and we adopt the previous setup (including r¹, r², b, and its vectorization β). In addition, we refer to the months m and m+1 as months 1 and 2, respectively. Thus, R̄¹ = R¹ and R̄² = (1 − ω₂) R¹ + ω₂ R².
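The joint-probability bookkeeping in (3.4)–(3.6) amounts to building a grid of month-average states and flattening it in row-major order. A sketch with our function names and illustrative rate grids:

```python
def joint_states(r1, r2, w):
    # K1 x K2 grid of month-average states (3.6):
    # s_{kl} = (1 - w) * r1_k + w * r2_l.
    return [[(1.0 - w) * rk + w * rl for rl in r2] for rk in r1]

def vectorize(b):
    # Row-major flattening (3.5): b_{kl} maps to beta_j with
    # j = (k - 1) * K2 + l  (1-based indices).
    return [x for row in b for x in row]

# Illustrative grids: two possible rates for R^1, two for R^2.
s = joint_states([0.0200, 0.0225], [0.0225, 0.0250], 0.5)
beta = vectorize([[0.10, 0.20], [0.30, 0.40]])
```

The same grid construction is reused for the two-month stacking below, where month 1 uses w = 0 (so its state matrix is just copies of r¹) and month 2 uses w = ω₂.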

Assume each month has its own slippage: Sᵐ = R̄ᵐ + uₘ, for m = 1, 2. Let xᵐ_{ikl} = φ(Kᵐ_i, γᵐ_i, sᵐ_{kl} + uₘ), where

    s¹_{kl} = r¹_k    (3.8)
    s²_{kl} = (1 − ω₂) r¹_k + ω₂ r²_l.    (3.9)

(Note the matrix s¹ is composed of K₂ copies of the vector r¹.) Let Xᵐ_i denote the vectorized version of xᵐ_{ikl}. Then we have

    yᵐ_i = Xᵐ_i β + εᵐ_i.    (3.10)

Let

    Y = [y¹; y²],  X̃ = [X¹; X²],  E = [ε¹; ε²].    (3.11)

We can express the joint likelihood as Y = X̃ β + E, where E ~ N(0_n, σ² I_n) and n = n₁ + n₂. Consequently, we can use the posterior sampler described above.[14] This approach can be generalized to include more than two months, where β is a vectorized version of the tensor of joint probabilities and β ∈ Δ^{K−1}, where K = ∏_u K_u. (See Appendix A for an example involving three contract months.)

Remark. Recall that α is the concentration parameter in the prior for β. Note that if α < K/max{K_u} then there is no restricted model that supports independence among the probabilities.[15] In effect, a sufficiently small value of α for the joint probabilities rules out independence.

Unscheduled meetings. In order to keep the analysis simple, consider a month with no scheduled meeting. Let R¹ denote the applicable target rate absent the unscheduled meeting and let R² denote the target rate set at the unscheduled meeting (if it occurs). Note that R² ≠ R¹. Let the month-average target rate be given by

    R̄ = (1 − ω) R¹ + ω R².    (3.12)

The novelty is that ω is unknown. If ω = 0, there is no unscheduled meeting, while if 0 < ω ≤ 1 there is an unscheduled meeting. The distribution for ω is discrete because there are a finite number of days in any month. (The values of ω with positive probability are further restricted by the fact that unscheduled meetings occur on business days, in the sense that the announcement of a new target rate occurs on business days.)

[14] See Appendix A for an alternative formulation in terms of marginal and conditional probabilities, with an illustration for a fixed value of ω₂.
[15] At least no simple model that I can see.
To treat the simplest case, suppose R¹ is known, in which case r̃_j = (1 − ω) R¹ + ω r²_j (where r̃_j = R¹ if and only if ω = 0) and X_{ij} = φ(K_i, γ_i, r̃_j + u). Note the X matrix depends on the unknown ω (as well as u). The (conditional) posterior distribution for ω is discrete, where

    p(ω | β, u, σ², y) ∝ exp( −S(β, u, ω) / (2σ²) ) p(ω).    (3.13)

We can think of (3.12) as combining two competing models: the no-unscheduled-meeting model N characterized by ω = 0 and the unscheduled-meeting model U characterized by

ω ≠ 0. The Bayes factor in favor of model N relative to model U is given by the posterior odds ratio divided by the prior odds ratio:

    BF_{N|U} = ( p(ω = 0 | y)/p(ω ≠ 0 | y) ) / ( p(ω = 0)/p(ω ≠ 0) ).    (3.14)

Let the prior distribution p(ω | q) be given by p(ω = 0 | q) = q and p(ω ≠ 0 | q) = 1 − q, where 1 − q is distributed equally over the remaining possibilities. Suppose the hyperparameter q has a prior distribution, p(q). Then p(ω = 0) = ∫ p(ω = 0 | q) p(q) dq = E[q] and p(ω = 0 | y) = ∫ p(ω = 0 | q) p(q | y) dq = E[q | y]. For example, suppose the prior for q is a beta distribution parameterized as follows:

    p(q) = B( q | α_q ξ_q, α_q (1 − ξ_q) ),    (3.15)

where ξ_q = E[q] and α_q is the concentration parameter. The flat prior is given by ξ_q = 1/2 and α_q = 2. If in fact we choose ξ_q = 1/2, then

    BF_{N|U} = E[q | y] / (1 − E[q | y]).    (3.16)

However, it probably makes more sense to choose ξ_q to match the unconditional probability that a month has no unscheduled meeting. If we choose α_q ≪ 1, then the prior expresses the following view: On the one hand it is probably very unlikely that there will be an unscheduled meeting, but on the other it is possible that it is very likely.

4. Data

Here we describe the data and what is required to put it in a form suitable for estimation. For each quote date there are quotes for puts and calls on a number of contracts. The strike prices are expressed in terms of an index P, where P = 100 (1 − S).[16] For example, P = 97 refers to a month-average funds rate of S = 1 − .97 = .03. Note that the payoff at expiration to a call option in terms of P translates into the payoff of a put in terms of S. To see this, let K_P = 100 (1 − K_S) denote the two expressions of a given strike price and note

    (P − K_P)⁺ = ( 100 (1 − S) − 100 (1 − K_S) )⁺ = 100 (K_S − S)⁺.    (4.1)

Similarly, put payoffs expressed in terms of P become call payoffs expressed in terms of S. Hereafter, assume the data are expressed in terms of S. Let t denote the quote date and m denote the contract month.
The data for a given quote date include quotes for a total of $M_t$ contract months, and for each contract month $m$ are in the form of $n_{tm}$ triples: $\{(K_{tmi}, \gamma_{tmi}, V_{tmi})\}_{i=1}^{n_{tm}}$. We deflate the option values: $y_{tmi} = V_{tmi}/B_{tm}$ for $i = 1, \ldots, n_{tm}$. The result is a data set for the quote date $t$ of the form $\{(K_{tmi}, \gamma_{tmi}, y_{tmi})\}_{i=1}^{n_{tm}}$ for $m = 1, \ldots, M_t$. It remains to compute the appropriate $X$ matrices from the strike prices $\{K_{tmi}\}$. We proceed as follows. (We will suppress the dependence on the quote date occasionally to reduce the notational clutter.) First, for each contract month in the data set we need to

[16] Because the relation between $S$ and $P$ is linear, there are no issues relating to Jensen's inequality in changing back and forth between the two representations.

find $\alpha_m$, $R_{\tau_m}$ and $\bar{R}_{\tau_m}$ in order to form $R_m$ as given in (3.2). Next compute the union of the unknown rates across the entire collection of months. Let $R_0$ denote the known target rate as of time $t$ and let $\{R_u\}_{u=1}^{U_t}$ denote the set of unknown rates that appear in the month-average rates. (At this point, some discretion on the part of the researcher comes into play.) For each unknown rate, specify the set of possible values: $R_u \in r^u = \{r_1^u, \ldots, r_{K_u}^u\}$. Let $r^0 = \{R_0\}$. It would probably make sense to have $r^0 \subseteq r^1 \subseteq \cdots \subseteq r^{U_t}$. The total number of target-rate paths is $K_t^* = \prod_{u=1}^{U_t} K_u$. If all of the data were to be used in a single joint estimation, the $X$ matrix would have $n_t = \sum_{m=1}^{M_t} n_{tm}$ rows/observations and $K_t^*$ columns/paths. It may not be possible to learn much from such a model, given the number of observations and the number of paths. Instead a subset of the months and a corresponding subset of the target rates may be used. Given these subsets, let $X_{ij}^* = \phi(K_i, \gamma_i, s_j^* + u)$, where the index $j = 1, \ldots, K^*$ is computed according to the vectorization of the joint probability tensor for the given subset (and $K^*$ now denotes the product computed from the subset). The data are now in a form suitable for estimation.

Additional thoughts. We will want to compare futures prices (as proxies for forward prices) with $E[S_m \mid y]$ computed from the posterior predictive distribution [see (2.18)]. We may want to use the futures prices to help form the prior for $\beta$. We can use put-call parity to assess the magnitude of the measurement error.

Questions for me. What is the expiration day if the last day of the month falls on a non-business day? Does the funds rate for a given day include trades after the FOMC announcement? What is the effective funds rate on meeting days, both with and without target-rate changes? Use this information to decide whether to include the meeting day with the previous target rate or with the next target rate.

5.
Dynamics

We can extend this approach to incorporate day-to-day dynamics. The idea here is that $\beta$ is a stochastic process that evolves through time. Since $\beta$ is a vector of probabilities, it must be a martingale, where[17] $E_{t-1}[\beta_t] = \beta_{t-1}$. Let $p(\beta_t \mid \beta_{t-1})$ denote the pdf for $\beta_t$ given the parameter $\beta_{t-1}$ and let $Y_{t-1} := (y_1, \ldots, y_{t-1})$. Then
$$p(\beta_t \mid y_t, Y_{t-1}) \propto p(y_t \mid \beta_t)\, p(\beta_t \mid Y_{t-1}) = p(y_t \mid \beta_t) \int p(\beta_t \mid \beta_{t-1})\, p(\beta_{t-1} \mid Y_{t-1})\, d\beta_{t-1}. \tag{5.1}$$
There are a number of ways to compute the updating in (5.1). The particle filter is discussed below (as an illustrative example). The central idea here is that the probabilities on a given day (for a fixed set of target rates) will be closely related to the probabilities on the preceding and succeeding days. One consequence is that more efficient estimation could use the information in the data from the neighboring days $(y_{t-1}, y_{t+1})$. Another consequence is that if we wish to infer whether the probabilities have changed from one day to the next, we must compute their joint distribution $p(\beta_{t-1}, \beta_t)$.

[17] Warning: The notation in this section may conflict with that in other sections.
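The transition density $p(\beta_t \mid \beta_{t-1})$ is left unspecified at this point. One simple choice — an assumption for illustration, not the paper's specification — that keeps $\beta_t$ on the simplex and satisfies the martingale condition $E_{t-1}[\beta_t] = \beta_{t-1}$ is a Dirichlet centered on the previous day's probabilities; the concentration parameter `c` below is a made-up tuning value:

```python
import numpy as np

rng = np.random.default_rng(0)

def dirichlet_step(beta_prev, c, rng):
    """One step of a simplex-valued process: beta_t ~ Dirichlet(c * beta_prev).
    The Dirichlet mean is (c * beta_prev) / sum(c * beta_prev) = beta_prev,
    so E[beta_t | beta_prev] = beta_prev and the process is a martingale."""
    return rng.dirichlet(c * beta_prev)

beta_prev = np.array([0.2, 0.5, 0.3])  # yesterday's target-rate probabilities (made up)
c = 200.0                              # larger c => smaller day-to-day movements

draws = np.array([dirichlet_step(beta_prev, c, rng) for _ in range(20000)])
# Each draw lies on the simplex, and the sample mean is close to beta_prev.
```

Any transition with this mean-preserving property could serve as $p(\beta_t \mid \beta_{t-1})$ in the recursion (5.1); the Dirichlet is convenient because its density is cheap to evaluate when the filter weights are computed.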

Particle filter. The updating in (5.1) can be computed using a particle filter. In particular, the draws from $p(\beta_{t-1} \mid Y_{t-1})$ constitute a particle swarm. There are a number of related ways to implement a particle filter. The most straightforward is the sampling importance resampling (SIR) approach. Following this approach, make a random draw from $p(\beta_t \mid \beta_{t-1}^{(i)})$ for each $\beta_{t-1}^{(i)}$ in the swarm. Denote these draws $\{\beta_t^{(i)}\}$. These draws represent draws from $p(\beta_t \mid Y_{t-1})$. At this point, importance sampling comes into play. Compute $w_t^{(i)} := p(y_t \mid \beta_t^{(i)})$ and resample $\{\beta_t^{(i)}\}$ using these weights. A problem one encounters is when the bulk of $p(y_t \mid \beta_t)$ is located in the tail of $p(\beta_t \mid Y_{t-1})$: then the weights $\{w_t^{(i)}\}$ may be dominated by a few large values, with the result that the swarm provides a poor representation of the posterior.

An alternative approach is the independent particle filter (IPF). Make draws $\{\beta_t^{(j)}\}$ from the likelihood $p(y_t \mid \beta_t)$, assuming it is proper.[18] We need to compute the weights
$$p(\beta_t^{(j)} \mid Y_{t-1}) = \int p(\beta_t^{(j)} \mid \beta_{t-1})\, p(\beta_{t-1} \mid Y_{t-1})\, d\beta_{t-1}.$$
The factor $p(\beta_{t-1} \mid Y_{t-1})$ is encoded in the posterior draws from the previous day.[19] For each draw from the second day, choose $L$ draws from the first day, where $1 \le L \le M$ and $M$ is the number of draws from the first day. Then compute $z_t^{(i,j)} = p(\beta_t^{(j)} \mid \beta_{t-1}^{(i)})$. Let $z_t^{(j)} = \sum_{i=1}^{L} z_t^{(i,j)}$ and use these weights to resample $\{\beta_t^{(j)}\}$. (By sampling from $p(\beta_{t-1} \mid Y_{t-1})$, we incorporate its weight implicitly.)

We can tweak the IPF by drawing from $p(y_t \mid \beta_t)\, \tilde{p}(\beta_t)$, where $\tilde{p}(\beta_t)$ is the maximum-entropy distribution with mean $\bar{\beta} = \frac{1}{M} \sum_{i=1}^{M} \beta_{t-1}^{(i)}$. (We could use the Dirichlet distribution as well.) This will tend to reduce the variation of the weights, computed as $p(\beta_t \mid \beta_{t-1})/\tilde{p}(\beta_t)$.

In order to obtain the joint distribution $p(\beta_{t-1}, \beta_t \mid Y_{t-1}, y_t)$, we can apply the so-called smoothing step. We do this as follows.

[18] The draws of $\beta_t$ are made independent of $\beta_{t-1}$. For more on the IPF, see Lin et al. (2005).
[19] One way to proceed is to take a draw from $p(\beta_t \mid \beta_{t-1}^{(i)})$ for each $i$ from the first day and then approximate the density.
Make a draw from $\{\beta_t^{(j)}\}$ and, conditional on that draw, make a draw from $\{\beta_{t-1}^{(i)}\}_{i=1}^{M}$ using the weights $\{z_t^{(i,j)}\}_{i=1}^{M}$ (fixing the $j$ drawn). The pairs thus generated are from the joint distribution conditional on $(Y_{t-1}, y_t)$. The marginal distribution for $\beta_{t-1}$ encoded in these draws reflects the data from both days.

The computation of $\{z_t^{(i,j)}\}$ can be quite expensive. One can reduce the expense by discretizing by binning: compute $p(\beta_t^{(j)} \mid \beta_{t-1}^{(i)})$ at the midpoints of the bins and use the bin counts to adjust. Let $z_t^{(i,j)} := p(\beta_t^{(j)} \mid \beta_{t-1}^{(i)})\, n_t^{(i,j)}$, where $p(\beta_t^{(j)} \mid \beta_{t-1}^{(i)})$ is computed at the midpoint of bin $(i, j)$ and $n_t^{(i,j)}$ denotes the corresponding bin count. Then $z_t^{(j)} := \sum_i z_t^{(i,j)}$.

Appendix A. Additional material

Marginal and conditional target-rate probabilities. Given the joint probabilities (3.4), we can compute marginal probabilities $p(R^1 = r_k^1 \mid b) = \sum_{l=1}^{K_2} b_{kl} = \beta_k^1$ and $p(R^2 = r_l^2 \mid b) =$

$\sum_{k=1}^{K_1} b_{kl} = \beta_l^2$ and conditional probabilities
$$p(R^2 = r_l^2 \mid R^1 = r_k^1, b) = \frac{p(R^1 = r_k^1, R^2 = r_l^2 \mid b)}{p(R^1 = r_k^1 \mid b)} = \frac{b_{kl}}{\sum_{l'=1}^{K_2} b_{kl'}} =: B_{kl}. \tag{A.1}$$
Note $\beta^2 = B^\top \beta^1$. Also note $p(R^1 = r_k^1, R^2 = r_l^2 \mid B, \beta^1) = B_{kl}\, \beta_k^1$. As an alternative to modeling the joint probabilities, one could model marginal and conditional probabilities. If we choose to model $\beta^1$ and $B$ (instead of modeling $b$), the number of parameters remains $K_1 K_2 - 1$, since $\sum_{k=1}^{K_1} \beta_k^1 = 1$ and $\sum_{l=1}^{K_2} B_{kl} = 1$ for $k = 1, \ldots, K_1$. (Each of the $K_1$ rows of $B$ sums to one.) We could adopt Dirichlet priors for $\beta^1$ and for the rows of $B$. This approach, however, offers no advantages in general relative to the approach of modeling the joint probabilities.

In order to illustrate this approach, let us modify the example involving two months by setting $\omega_2 = 1$. In this case each month is associated with a single target rate and consequently
$$\begin{bmatrix} X^1 \\ X^2 \end{bmatrix} \beta = \begin{bmatrix} X_1 & 0 \\ 0 & X_2 \end{bmatrix} \begin{bmatrix} \beta^1 \\ \beta^2 \end{bmatrix} \tag{A.2}$$
subject to $\beta^2 = B^\top \beta^1$, where
$$X_{m,ij} = \phi(K_{mi}, \gamma_{mi}, r_j^m + u) \tag{A.3}$$
for $r_j^m \in r^m$. Carlson et al. (2005) provide an example of this approach (with $u = 0$ in their setup). In the example (p. 1214), $K_1 = 2$ and $K_2 = 3$. Consequently there are $K = K_1 K_2 = 6$ possible paths and $K - 1 = 5$ independent probabilities. Using prior information, they set three path probabilities to zero, leaving three paths and two independent probabilities remaining. The restrictions produce the following matrix of conditional probabilities:[20]
$$B = \begin{bmatrix} B_{11} & 0 & 0 \\ 0 & B_{22} & B_{23} \end{bmatrix}, \tag{A.4}$$
so that
$$\beta^2 = B^\top \beta^1 = \begin{bmatrix} B_{11}\, \beta_1^1 \\ B_{22}\, \beta_2^1 \\ B_{23}\, \beta_2^1 \end{bmatrix}. \tag{A.5}$$
Since each row of $B$ must sum to one, we have $B_{11} = 1$ and $B_{22} + B_{23} = 1$. In addition $\beta_1^1 + \beta_2^1 = 1$. (Together these restrictions imply $\beta_1^2 + \beta_2^2 + \beta_3^2 = 1$.) This leaves a total of two free parameters.

[20] The matrix of joint probabilities is
$$b = \begin{bmatrix} B_{11}\, \beta_1^1 & 0 & 0 \\ 0 & B_{22}\, \beta_2^1 & B_{23}\, \beta_2^1 \end{bmatrix},$$
where (row-by-row) $b_k = B_k\, \beta_k^1$.
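The mapping between the joint probabilities $b$ and the pair $(\beta^1, B)$ can be sketched numerically. The joint probabilities below are made-up values chosen to match the zero pattern of (A.4):

```python
import numpy as np

# Joint probabilities b[k, l] = p(R1 = r1_k, R2 = r2_l); K1 = 2 rows, K2 = 3 columns.
# Paths (1,2), (1,3), and (2,1) are ruled out a priori, as in the Carlson et al. example.
b = np.array([[0.40, 0.00, 0.00],
              [0.00, 0.45, 0.15]])

beta1 = b.sum(axis=1)    # marginal probabilities for R1
beta2 = b.sum(axis=0)    # marginal probabilities for R2
B = b / beta1[:, None]   # conditional probabilities p(R2 | R1); each row sums to one

assert np.isclose(b.sum(), 1.0)
assert np.allclose(B.sum(axis=1), 1.0)
assert np.allclose(B.T @ beta1, beta2)       # beta2 is B-transpose times beta1
assert np.allclose(B * beta1[:, None], b)    # recover the joint: b_kl = B_kl * beta1_k
```

The last assertion is the point of the text: $(\beta^1, B)$ carries exactly the same information as $b$, so the two parameterizations have the same number of free parameters.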

An example involving three months. Here we provide an example involving three months. Let
$$R_m = (1 - \omega_m)\, R_{\tau_m} + \omega_m\, \bar{R}_{\tau_m} \tag{A.6a}$$
$$R_{m+1} = (1 - \omega_{m+1})\, R_{\tau_{m+1}} + \omega_{m+1}\, \bar{R}_{\tau_{m+1}} \tag{A.6b}$$
$$R_{m+2} = (1 - \omega_{m+2})\, R_{\tau_{m+2}} + \omega_{m+2}\, \bar{R}_{\tau_{m+2}}. \tag{A.6c}$$
If $\omega_{m+1} = 1$, then there is no rate in common between $R_m$ and $R_{m+1}$. Similarly, if $\omega_{m+2} = 1$, then there is no rate in common between $R_{m+1}$ and $R_{m+2}$. Assuming $\omega_{m+1} < 1$,
$$R_{\tau_{m+1}} = \begin{cases} R_{\tau_m} & \omega_m = 0 \\ \bar{R}_{\tau_m} & \omega_m > 0 \end{cases} \tag{A.7}$$
and assuming $\omega_{m+2} < 1$,
$$R_{\tau_{m+2}} = \begin{cases} R_{\tau_{m+1}} & \omega_{m+1} = 0 \\ \bar{R}_{\tau_{m+1}} & \omega_{m+1} > 0. \end{cases} \tag{A.8}$$
Suppose $\omega_m, \omega_{m+1} \in (0, 1)$ and $\omega_{m+2} = 0$, so that
$$R_m = (1 - \omega_m)\, R^1 + \omega_m\, R^2 \tag{A.9}$$
$$R_{m+1} = (1 - \omega_{m+1})\, R^2 + \omega_{m+1}\, R^3 \tag{A.10}$$
$$R_{m+2} = R^3, \tag{A.11}$$
where $(R^1, R^2, R^3)$ represent the three distinct rates. Let $r^m = (r_1^m, \ldots, r_{K_m}^m)$ for $m = 1, 2, 3$ and let $p(R^1 = r_k^1, R^2 = r_l^2, R^3 = r_w^3 \mid B) = B_{klw}$, where $B$ is a rank-3 tensor of joint probabilities with dimensions $K_1 \times K_2 \times K_3$. Let $\beta$ denote the vectorized version of $B$:
$$\beta_j = B_{klw} \quad\text{for}\quad j = (k-1)\, K_2 K_3 + (l-1)\, K_3 + w. \tag{A.12}$$
Note $\beta \in \Delta^{K-1}$, where $K = K_1 K_2 K_3$. Assume $S_m = R_m + u$. Let $x_{iklw}^m = \phi(K_i^m, \gamma_i^m, s_{klw}^m + u)$, where
$$s_{klw}^1 = (1 - \omega_m)\, r_k^1 + \omega_m\, r_l^2 \tag{A.13}$$
$$s_{klw}^2 = (1 - \omega_{m+1})\, r_l^2 + \omega_{m+1}\, r_w^3 \tag{A.14}$$
$$s_{klw}^3 = r_w^3. \tag{A.15}$$
Then we have
$$y_i^m = X_i^m \beta + \varepsilon_i^m \tag{A.16}$$
where $X_i^m$ is a vectorized version of the tensor $x_i^m$. Again we can express the joint likelihood as $Y = X\beta + E$, where $E \sim N(0_n, \sigma^2 I_n)$ and where $n = n_1 + n_2 + n_3$.

References

Carlson, J. B., B. R. Craig, and W. R. Melick (2005). Recovering market expectations of FOMC rate changes with options on federal funds futures. Journal of Futures Markets 25.

Geweke, J. (1986). Exact inference in the inequality constrained normal linear regression model. Journal of Applied Econometrics 1.

Lin, M. T., J. L. Zhang, Q. Cheng, and R. Chen (2005). Independent particle filters. Journal of the American Statistical Association 100.

Federal Reserve Bank of Atlanta, 1000 Peachtree St. N.E., Atlanta, GA 30309


ESTIMATING AND FORMING CONFIDENCE INTERVALS FOR EXTREMA OF RANDOM POLYNOMIALS. A Thesis. Presented to. The Faculty of the Department of Mathematics

ESTIMATING AND FORMING CONFIDENCE INTERVALS FOR EXTREMA OF RANDOM POLYNOMIALS. A Thesis. Presented to. The Faculty of the Department of Mathematics ESTIMATING AND FORMING CONFIDENCE INTERVALS FOR EXTREMA OF RANDOM POLYNOMIALS A Thesis Presented to The Faculty of the Departent of Matheatics San Jose State University In Partial Fulfillent of the Requireents

More information

Detection and Estimation Theory

Detection and Estimation Theory ESE 54 Detection and Estiation Theory Joseph A. O Sullivan Sauel C. Sachs Professor Electronic Systes and Signals Research Laboratory Electrical and Systes Engineering Washington University 11 Urbauer

More information

Intelligent Systems: Reasoning and Recognition. Artificial Neural Networks

Intelligent Systems: Reasoning and Recognition. Artificial Neural Networks Intelligent Systes: Reasoning and Recognition Jaes L. Crowley MOSIG M1 Winter Seester 2018 Lesson 7 1 March 2018 Outline Artificial Neural Networks Notation...2 Introduction...3 Key Equations... 3 Artificial

More information

Multi-Dimensional Hegselmann-Krause Dynamics

Multi-Dimensional Hegselmann-Krause Dynamics Multi-Diensional Hegselann-Krause Dynaics A. Nedić Industrial and Enterprise Systes Engineering Dept. University of Illinois Urbana, IL 680 angelia@illinois.edu B. Touri Coordinated Science Laboratory

More information

Interactive Markov Models of Evolutionary Algorithms

Interactive Markov Models of Evolutionary Algorithms Cleveland State University EngagedScholarship@CSU Electrical Engineering & Coputer Science Faculty Publications Electrical Engineering & Coputer Science Departent 2015 Interactive Markov Models of Evolutionary

More information

Inference in the Presence of Likelihood Monotonicity for Polytomous and Logistic Regression

Inference in the Presence of Likelihood Monotonicity for Polytomous and Logistic Regression Advances in Pure Matheatics, 206, 6, 33-34 Published Online April 206 in SciRes. http://www.scirp.org/journal/ap http://dx.doi.org/0.4236/ap.206.65024 Inference in the Presence of Likelihood Monotonicity

More information

W-BASED VS LATENT VARIABLES SPATIAL AUTOREGRESSIVE MODELS: EVIDENCE FROM MONTE CARLO SIMULATIONS

W-BASED VS LATENT VARIABLES SPATIAL AUTOREGRESSIVE MODELS: EVIDENCE FROM MONTE CARLO SIMULATIONS W-BASED VS LATENT VARIABLES SPATIAL AUTOREGRESSIVE MODELS: EVIDENCE FROM MONTE CARLO SIMULATIONS. Introduction When it coes to applying econoetric odels to analyze georeferenced data, researchers are well

More information

The Transactional Nature of Quantum Information

The Transactional Nature of Quantum Information The Transactional Nature of Quantu Inforation Subhash Kak Departent of Coputer Science Oklahoa State University Stillwater, OK 7478 ABSTRACT Inforation, in its counications sense, is a transactional property.

More information

12 Towards hydrodynamic equations J Nonlinear Dynamics II: Continuum Systems Lecture 12 Spring 2015

12 Towards hydrodynamic equations J Nonlinear Dynamics II: Continuum Systems Lecture 12 Spring 2015 18.354J Nonlinear Dynaics II: Continuu Systes Lecture 12 Spring 2015 12 Towards hydrodynaic equations The previous classes focussed on the continuu description of static (tie-independent) elastic systes.

More information

NUMERICAL MODELLING OF THE TYRE/ROAD CONTACT

NUMERICAL MODELLING OF THE TYRE/ROAD CONTACT NUMERICAL MODELLING OF THE TYRE/ROAD CONTACT PACS REFERENCE: 43.5.LJ Krister Larsson Departent of Applied Acoustics Chalers University of Technology SE-412 96 Sweden Tel: +46 ()31 772 22 Fax: +46 ()31

More information

Pseudo-marginal Metropolis-Hastings: a simple explanation and (partial) review of theory

Pseudo-marginal Metropolis-Hastings: a simple explanation and (partial) review of theory Pseudo-arginal Metropolis-Hastings: a siple explanation and (partial) review of theory Chris Sherlock Motivation Iagine a stochastic process V which arises fro soe distribution with density p(v θ ). Iagine

More information

General Properties of Radiation Detectors Supplements

General Properties of Radiation Detectors Supplements Phys. 649: Nuclear Techniques Physics Departent Yarouk University Chapter 4: General Properties of Radiation Detectors Suppleents Dr. Nidal M. Ershaidat Overview Phys. 649: Nuclear Techniques Physics Departent

More information

Data-Driven Imaging in Anisotropic Media

Data-Driven Imaging in Anisotropic Media 18 th World Conference on Non destructive Testing, 16- April 1, Durban, South Africa Data-Driven Iaging in Anisotropic Media Arno VOLKER 1 and Alan HUNTER 1 TNO Stieltjesweg 1, 6 AD, Delft, The Netherlands

More information

Physics 215 Winter The Density Matrix

Physics 215 Winter The Density Matrix Physics 215 Winter 2018 The Density Matrix The quantu space of states is a Hilbert space H. Any state vector ψ H is a pure state. Since any linear cobination of eleents of H are also an eleent of H, it

More information

Kinetic Theory of Gases: Elementary Ideas

Kinetic Theory of Gases: Elementary Ideas Kinetic Theory of Gases: Eleentary Ideas 17th February 2010 1 Kinetic Theory: A Discussion Based on a Siplified iew of the Motion of Gases 1.1 Pressure: Consul Engel and Reid Ch. 33.1) for a discussion

More information

Sharp Time Data Tradeoffs for Linear Inverse Problems

Sharp Time Data Tradeoffs for Linear Inverse Problems Sharp Tie Data Tradeoffs for Linear Inverse Probles Saet Oyak Benjain Recht Mahdi Soltanolkotabi January 016 Abstract In this paper we characterize sharp tie-data tradeoffs for optiization probles used

More information

AN OPTIMAL SHRINKAGE FACTOR IN PREDICTION OF ORDERED RANDOM EFFECTS

AN OPTIMAL SHRINKAGE FACTOR IN PREDICTION OF ORDERED RANDOM EFFECTS Statistica Sinica 6 016, 1709-178 doi:http://dx.doi.org/10.5705/ss.0014.0034 AN OPTIMAL SHRINKAGE FACTOR IN PREDICTION OF ORDERED RANDOM EFFECTS Nilabja Guha 1, Anindya Roy, Yaakov Malinovsky and Gauri

More information

Bayesian inference for stochastic differential mixed effects models - initial steps

Bayesian inference for stochastic differential mixed effects models - initial steps Bayesian inference for stochastic differential ixed effects odels - initial steps Gavin Whitaker 2nd May 2012 Supervisors: RJB and AG Outline Mixed Effects Stochastic Differential Equations (SDEs) Bayesian

More information

Computational and Statistical Learning Theory

Computational and Statistical Learning Theory Coputational and Statistical Learning Theory Proble sets 5 and 6 Due: Noveber th Please send your solutions to learning-subissions@ttic.edu Notations/Definitions Recall the definition of saple based Radeacher

More information

Tracking using CONDENSATION: Conditional Density Propagation

Tracking using CONDENSATION: Conditional Density Propagation Tracking using CONDENSATION: Conditional Density Propagation Goal Model-based visual tracking in dense clutter at near video frae rates M. Isard and A. Blake, CONDENSATION Conditional density propagation

More information

Kinematics and dynamics, a computational approach

Kinematics and dynamics, a computational approach Kineatics and dynaics, a coputational approach We begin the discussion of nuerical approaches to echanics with the definition for the velocity r r ( t t) r ( t) v( t) li li or r( t t) r( t) v( t) t for

More information

Support Vector Machines MIT Course Notes Cynthia Rudin

Support Vector Machines MIT Course Notes Cynthia Rudin Support Vector Machines MIT 5.097 Course Notes Cynthia Rudin Credit: Ng, Hastie, Tibshirani, Friedan Thanks: Şeyda Ertekin Let s start with soe intuition about argins. The argin of an exaple x i = distance

More information

Measuring Temperature with a Silicon Diode

Measuring Temperature with a Silicon Diode Measuring Teperature with a Silicon Diode Due to the high sensitivity, nearly linear response, and easy availability, we will use a 1N4148 diode for the teperature transducer in our easureents 10 Analysis

More information

An Improved Particle Filter with Applications in Ballistic Target Tracking

An Improved Particle Filter with Applications in Ballistic Target Tracking Sensors & ransducers Vol. 72 Issue 6 June 204 pp. 96-20 Sensors & ransducers 204 by IFSA Publishing S. L. http://www.sensorsportal.co An Iproved Particle Filter with Applications in Ballistic arget racing

More information

III.H Zeroth Order Hydrodynamics

III.H Zeroth Order Hydrodynamics III.H Zeroth Order Hydrodynaics As a first approxiation, we shall assue that in local equilibriu, the density f 1 at each point in space can be represented as in eq.iii.56, i.e. f 0 1 p, q, t = n q, t

More information

Sequence Analysis, WS 14/15, D. Huson & R. Neher (this part by D. Huson) February 5,

Sequence Analysis, WS 14/15, D. Huson & R. Neher (this part by D. Huson) February 5, Sequence Analysis, WS 14/15, D. Huson & R. Neher (this part by D. Huson) February 5, 2015 31 11 Motif Finding Sources for this section: Rouchka, 1997, A Brief Overview of Gibbs Sapling. J. Buhler, M. Topa:

More information

Kinetic Theory of Gases: Elementary Ideas

Kinetic Theory of Gases: Elementary Ideas Kinetic Theory of Gases: Eleentary Ideas 9th February 011 1 Kinetic Theory: A Discussion Based on a Siplified iew of the Motion of Gases 1.1 Pressure: Consul Engel and Reid Ch. 33.1) for a discussion of

More information

ma x = -bv x + F rod.

ma x = -bv x + F rod. Notes on Dynaical Systes Dynaics is the study of change. The priary ingredients of a dynaical syste are its state and its rule of change (also soeties called the dynaic). Dynaical systes can be continuous

More information

A Simplified Analytical Approach for Efficiency Evaluation of the Weaving Machines with Automatic Filling Repair

A Simplified Analytical Approach for Efficiency Evaluation of the Weaving Machines with Automatic Filling Repair Proceedings of the 6th SEAS International Conference on Siulation, Modelling and Optiization, Lisbon, Portugal, Septeber -4, 006 0 A Siplified Analytical Approach for Efficiency Evaluation of the eaving

More information

e-companion ONLY AVAILABLE IN ELECTRONIC FORM

e-companion ONLY AVAILABLE IN ELECTRONIC FORM OPERATIONS RESEARCH doi 10.1287/opre.1070.0427ec pp. ec1 ec5 e-copanion ONLY AVAILABLE IN ELECTRONIC FORM infors 07 INFORMS Electronic Copanion A Learning Approach for Interactive Marketing to a Custoer

More information

ASSUME a source over an alphabet size m, from which a sequence of n independent samples are drawn. The classical

ASSUME a source over an alphabet size m, from which a sequence of n independent samples are drawn. The classical IEEE TRANSACTIONS ON INFORMATION THEORY Large Alphabet Source Coding using Independent Coponent Analysis Aichai Painsky, Meber, IEEE, Saharon Rosset and Meir Feder, Fellow, IEEE arxiv:67.7v [cs.it] Jul

More information

Sampling How Big a Sample?

Sampling How Big a Sample? C. G. G. Aitken, 1 Ph.D. Sapling How Big a Saple? REFERENCE: Aitken CGG. Sapling how big a saple? J Forensic Sci 1999;44(4):750 760. ABSTRACT: It is thought that, in a consignent of discrete units, a certain

More information

Pattern Recognition and Machine Learning. Artificial Neural networks

Pattern Recognition and Machine Learning. Artificial Neural networks Pattern Recognition and Machine Learning Jaes L. Crowley ENSIMAG 3 - MMIS Fall Seester 2017 Lessons 7 20 Dec 2017 Outline Artificial Neural networks Notation...2 Introduction...3 Key Equations... 3 Artificial

More information

i ij j ( ) sin cos x y z x x x interchangeably.)

i ij j ( ) sin cos x y z x x x interchangeably.) Tensor Operators Michael Fowler,2/3/12 Introduction: Cartesian Vectors and Tensors Physics is full of vectors: x, L, S and so on Classically, a (three-diensional) vector is defined by its properties under

More information

Figure 1: Equivalent electric (RC) circuit of a neurons membrane

Figure 1: Equivalent electric (RC) circuit of a neurons membrane Exercise: Leaky integrate and fire odel of neural spike generation This exercise investigates a siplified odel of how neurons spike in response to current inputs, one of the ost fundaental properties of

More information