Estimating Covariance Using Factorial Hidden Markov Models
Estimating Covariance Using Factorial Hidden Markov Models
João Sedoc 1,2, with: Jordan Rodu 3, Lyle Ungar 1, Dean Foster 1, and Jean Gallier 1
1 University of Pennsylvania, Philadelphia, PA (joao@cis.upenn.edu)
2 Chivalric Trading
3 Carnegie Mellon University, Pittsburgh, PA
PGMO Conference, October 29, 2014
João Sedoc, Estimating Covariance Using Factorial Hidden Markov Models
Outline
1. Motivation
   - What's Novel?
   - Portfolio Optimization
   - Non-Stationary Covariance
2. Introduction to Factorial HMMs
   - HMM Application to Problems
   - Quick Overview of Hidden Markov Models
   - Estimation
   - Factorial HMM
3. Empirical Results
4. Conclusion
What's Novel: Innovations to Factorial HMMs
- Multiple time horizon HMM using a structured approach
- Incorporation of high frequency data
- Estimation in near real time
- Continuous emission HMM
- Provable bounds
- Incorporation of exogenous data
What's Novel: Application to Portfolio Optimization
- Markowitz optimization is a well-known theory, but hard to do right.
- The allocation is optimized under exponential utility:
  $$\operatorname*{argmax}_{\alpha_{pos}} \; P^\top \alpha_{pos} - \frac{1}{2\zeta}\,\alpha_{pos}^\top \Sigma\, \alpha_{pos}$$
  where $\alpha_{pos}$ is the notional allocation, $p_t$ is the asset price at time $t$, $P_t = E[p_{t+\tau} - p_t]$ is the expected profit, $\Sigma$ is the asset return covariance matrix, and $\zeta$ is the risk-aversion free variable.
- In this talk we will focus only on improving covariance estimation.
- We want a better estimate of $\Sigma \to \Sigma_t$.
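The quadratic program above has a closed-form optimum, which a short numerical sketch can illustrate. The asset numbers below are invented for illustration, not from the talk:

```python
import numpy as np

# Hypothetical two-asset example: expected profits P, return covariance
# Sigma, and risk-aversion free variable zeta (all values made up).
P = np.array([0.02, 0.01])
Sigma = np.array([[0.04, 0.01],
                  [0.01, 0.09]])
zeta = 2.0

# Setting the gradient of P^T a - (1/(2 zeta)) a^T Sigma a to zero
# gives the optimal notional allocation a* = zeta * Sigma^{-1} P.
alpha_pos = zeta * np.linalg.solve(Sigma, P)

def certainty_equivalent(a):
    return P @ a - (1.0 / (2.0 * zeta)) * a @ Sigma @ a
```

Because the objective is concave, any perturbation of `alpha_pos` lowers the certainty equivalent, which is a quick sanity check on the algebra.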
Drawbacks of Current Models
- Modern approaches are constrained by computational complexity
- Trade-off between model richness and data richness
- Difficult to both explain and identify the model
- Incorporation of exogenous data is often difficult in empirical models
S&P 500 Realized Variance
Figure: S&P 500 variance (second resolution)
S&P 500 and 30 Year Treasury Realized Covariance
Figure: S&P 500 and 30 Year Treasury covariance (second resolution)
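Realized (co)variance at second resolution, as plotted in these figures, is the running sum of products of high-frequency returns. A minimal sketch with simulated returns standing in for the S&P 500 and 30-Year Treasury series (all numbers hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 23_400  # roughly one US trading day of one-second returns

# Simulated one-second returns for two correlated assets.
r_spx = rng.normal(0.0, 1e-4, n)
r_tsy = 0.5 * r_spx + rng.normal(0.0, 1e-4, n)

# Realized variance and covariance over the day.
realized_var = np.sum(r_spx ** 2)
realized_cov = np.sum(r_spx * r_tsy)
```

With real data, recomputing these sums over rolling windows produces exactly the kind of visibly non-stationary covariance path the figures show.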
Common Applications of Hidden Markov Models
- Gene recognition
- Robotics
- Natural language processing tasks
- Speech recognition
Hidden Markov Models
There are two primary assumptions for this basic HMM:
1. The underlying hidden state process is Markovian.
2. Given the hidden states, the observations are independent.
Figure: HMM with states $h_t$, $h_{t+1}$, and $h_{t+2}$ that emit observations $x_t$, $x_{t+1}$, and $x_{t+2}$ respectively.
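Both assumptions can be read directly off a generative sketch: the next state is drawn from the current state alone, and each observation is drawn from its state alone. The transition and emission numbers below are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
T = np.array([[0.9, 0.2],
              [0.1, 0.8]])      # T[i, j] = Pr(h_{t+1} = i | h_t = j)
O = np.array([[0.7, 0.1],
              [0.3, 0.9]])      # O[x, h] = Pr(x_t = x | h_t = h)
pi = np.array([0.5, 0.5])       # Pr(h_1)

def sample_hmm(n_steps):
    h = rng.choice(2, p=pi)
    states, obs = [], []
    for _ in range(n_steps):
        states.append(h)
        obs.append(rng.choice(2, p=O[:, h]))  # emission depends only on h_t
        h = rng.choice(2, p=T[:, h])          # next state depends only on h_t
    return states, obs

states, obs = sample_hmm(10)
```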
Hidden Markov Models
The probability distribution over the next hidden state at time $t+1$ depends only on the current hidden state at time $t$:
$$\Pr(h_{t+1} \mid h_t, \ldots, h_1) = \Pr(h_{t+1} \mid h_t).$$
The Hidden Markov Model Parameters
- Transition matrix: $T_{ij} = \Pr(h_{t+1} = i \mid h_t = j)$
- Emission distributions: a collection of $\lambda(x)$'s, where $\lambda(x) = \Pr(x_{t+1} \mid h_{t+1})$
- Initial state distribution: $\pi = \Pr(h_1)$
Hidden Markov Models
The likelihood of a sequence of observations from a specified model is
$$\Pr(x_1, \ldots, x_t) = \sum_{h_1, \ldots, h_t} [\pi]_{h_1} \prod_{j=2}^{t} [T]_{h_j, h_{j-1}} \prod_{j=1}^{t} [\lambda(x_j)]_{h_j},$$
though we will not consider this particular form of the likelihood. Instead, we will look at a new form for the likelihood,
$$\Pr(x_t, \ldots, x_1) = 1^\top A(x_t) \cdots A(x_1)\, \pi,$$
where $\lambda(x)$ is the distribution of the observation given a hidden state, and $A(x) = T\,\mathrm{diag}(\lambda(x))$.
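For a small discrete HMM, the operator form of the likelihood can be checked against the explicit sum over hidden paths. The parameters below are arbitrary toy values:

```python
import numpy as np
from itertools import product

# Toy 2-state, 3-symbol HMM (hypothetical parameters).
T = np.array([[0.9, 0.2],
              [0.1, 0.8]])          # T[i, j] = Pr(h' = i | h = j)
O = np.array([[0.7, 0.1],
              [0.2, 0.3],
              [0.1, 0.6]])          # O[x, h] = Pr(x | h), so lambda(x) = O[x]
pi = np.array([0.6, 0.4])

def A(x):
    return T @ np.diag(O[x])        # A(x) = T diag(lambda(x))

def likelihood_operator(xs):
    v = pi
    for x in xs:
        v = A(x) @ v
    return v.sum()                  # 1^T A(x_t) ... A(x_1) pi

def likelihood_paths(xs):
    # Direct sum over all hidden-state sequences.
    total = 0.0
    for hs in product(range(2), repeat=len(xs)):
        p = pi[hs[0]]
        for j in range(1, len(xs)):
            p *= T[hs[j], hs[j - 1]]
        for j, x in enumerate(xs):
            p *= O[x, hs[j]]
        total += p
    return total
```

The operator form costs a handful of small matrix-vector products per observation, while the path sum grows exponentially in sequence length.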
Hidden Markov Models
$\lambda(x) = \Pr(x \mid h)$, and the entries of $A(x) = T\,\mathrm{diag}(\lambda(x))$ are $[A(x)]_{ij} = \Pr(h_{t+1} = i, x \mid h_t = j)$.
Figure: $A(x)$, graphically.
Spectral Methods for Estimation
- Spectral methods use singular value decomposition (SVD) and the method of moments.
- Fast SVD replaces forward-backward EM estimation.
- Computing observables for spectral estimation of an HMM uses the fully reduced third moment.
- Estimation speed is critical given the size of high-frequency financial datasets.
- For US equities, sampling once per second yields roughly 5 million data points per year per stock!
Spectral Algorithm Sketch
- Calculate $E[X_2 \otimes X_1]$.
- Calculate the fast SVD of $E[X_2 \otimes X_1]$, keeping the $k$ left singular vectors.
- Reduce the data: $\hat{y} = \hat{U}^\top x$.
- Compute the first three moments $E[Y_1]$, $E[Y_2 \otimes Y_1]$, and $E[Y_3 \otimes Y_1 \otimes Y_2]$.
In the discrete case,
$$\Pr(x_t, \ldots, x_1) = b_\infty^\top B(y_t) \cdots B(y_1)\, b_1,$$
where $B(y)$ is a similarity transform of $A(x)$.
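The SVD-and-reduce steps above can be sketched in a few lines. The data here are simulated with a planted rank-k structure; dimensions and noise levels are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(1)
d, k, n = 50, 3, 2000

# Simulated paired observations (x_1, x_2) whose cross-moment has rank k.
M = rng.normal(size=(d, k))
H = rng.normal(size=(k, n))
X1 = M @ H + 0.01 * rng.normal(size=(d, n))
X2 = M @ rng.normal(size=(k, k)) @ H + 0.01 * rng.normal(size=(d, n))

# Empirical E[X2 (x) X1] and its top-k left singular vectors.
C21 = (X2 @ X1.T) / n
U, s, Vt = np.linalg.svd(C21)
U_hat = U[:, :k]

# Reduce the data: y = U_hat^T x.
Y1 = U_hat.T @ X1
```

For large $d$, a randomized SVD (Halko et al., cited under Further Reading) replaces the exact one; the reduction step itself is unchanged.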
Generalization to the Continuous Case
To generalize to the continuous case we need to take expectations, where
$$\Pr(x_t, \ldots, x_1) = b_\infty^\top B(G(x_t)) \cdots B(G(x_1))\, b_1$$
and $G(x)$ is an estimate of $E[Y_2 \mid x_1]$. $B(G(x))$ is exactly what we want, up to a constant factor depending on $x$.
Factorial HMM
Different state layers evolve differently.
Figure: Factorial HMM diagram
Factorial HMM
Figure: Structured Factorial HMM diagram
Structured Factorial HMM Differences
Improvements:
- Faster estimation using spectral methods
- Intuition about time horizon
- Simple layer aggregation
Drawbacks:
- Jumps in covariance estimation at hourly boundaries
- Heuristic choice of time horizon
- Requires lots of data
Stock Covariance Model
Table: RMSE, N training, and N out-of-sample for each model (CAPM; PCA with 15 principal components; GARCH; FHMM) at each horizon (daily, hourly, second). The numeric entries of the table were not preserved in this transcription.
Summary: Major Contributions
- Multiple time frames
- Richer model
- Intuitive explanation of model
- Fast estimation
Thanks for listening!
Future Work
- Empirical frequency selection
- Expansion to other datasets (energy / weather)
- Better estimation on lower time horizons
- Test more distributions for G(x)
For Further Reading
- A Spectral Algorithm for Learning Hidden Markov Models. Hsu, Kakade, Zhang, 2009.
- Finding Structure with Randomness: Probabilistic Algorithms for Constructing Approximate Matrix Decompositions. Halko, Martinsson, Tropp, 2011.
- Using Regression for Spectral Estimation. Foster, Rodu, Ungar, Wu, 2013.
- Two Step CCA: A New Spectral Method for Estimating Vector Models of Words. Dhillon, Foster, Rodu, Ungar, 2013.
- Spectral Dependency Parsing with Latent Variables. Collins, Dhillon, Foster, Rodu, Ungar, 2012.
- Spectral Dimensionality Reduction for HMMs. Foster, Rodu, Ungar, 2012.
Papers and Projects in Progress
- Spectral Estimation of HMMs with a Continuous Output Distribution. Foster, Rodu, Ungar (in progress).
- Spectral Estimation of Hierarchical HMMs. Foster, Rodu, Sedoc, Ungar (in progress).
Appendix
Spectral Methods for Estimation
In this section we describe how to build the observables $B(x)$. First note that the first three moments of the data from an HMM have the following theoretical form:
$$E[X_1] = M\pi$$
$$E[X_2 \otimes X_1] = M\,T\,\mathrm{diag}(\pi)\,M^\top$$
$$E[X_3 \otimes X_1 \otimes X_2](\lambda(x)) = M\,T\,\mathrm{diag}(\lambda(x))\,T\,\mathrm{diag}(\pi)\,M^\top$$
where in this particular setting $X_1$ is $\Pr(\Sigma_{t-1})$, $X_2$ is $\Pr(\Sigma_t)$, $X_3$ is $\Pr(\Sigma_{t+1})$, $\pi$ is the initial state vector, and column $i$ of $M$ is the expected value of $x$ given hidden state $i$.
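The second-moment identity follows from summing over hidden-state pairs, since $E[X_2 \otimes X_1] = \sum_{i,j} \Pr(h_1 = j)\,\Pr(h_2 = i \mid h_1 = j)\, M_{\cdot i} M_{\cdot j}^\top$. A numerical check with made-up parameters:

```python
import numpy as np

# Hypothetical 2-state model: column i of M is E[x | h = i].
M = np.array([[1.0, -1.0],
              [0.5,  2.0],
              [0.0,  1.0]])
T = np.array([[0.7, 0.4],
              [0.3, 0.6]])          # T[i, j] = Pr(h_2 = i | h_1 = j)
pi = np.array([0.5, 0.5])

# First moment: E[X1] = M pi.
EX1 = M @ pi

# Second moment as an explicit sum over hidden-state pairs.
EX2X1 = sum(pi[j] * T[i, j] * np.outer(M[:, i], M[:, j])
            for i in range(2) for j in range(2))
```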
Spectral Algorithm Sketch
- Calculate $E[X_2 \otimes X_1]$.
- Calculate the fast SVD of $E[X_2 \otimes X_1]$, keeping the $k$ left singular vectors.
- Reduce the data: $\hat{y} = \hat{U}^\top x$.
- Compute the first three moments $E[Y_1]$, $E[Y_2 \otimes Y_1]$, and $E[Y_3 \otimes Y_1 \otimes Y_2]$.
Consider a $U$ such that $U^\top M$ is invertible. Then estimating the second and third moments with the reduced data $y = U^\top x$ allows, in the discrete case,
$$B(x) \equiv E[Y_3 \otimes Y_1 \otimes Y_2](\lambda(x))\; E[Y_2 \otimes Y_1]^{-1} = (U^\top M)\,T\,\mathrm{diag}(\lambda(x))\,(U^\top M)^{-1}.$$
Generalization to the Continuous Case
To generalize to the continuous case we need to take expectations, where
$$B(G(x)) = (U^\top M)\,T\,\mathrm{diag}(\lambda(x))\,(U^\top M)^{-1}\,\frac{1}{\Pr(x)},$$
$\Pr(x)$ is the marginal probability, and $G(x)$ is a function of $E[Y_2 \mid x_1]$. $B(G(x))$ is exactly what we want, up to a constant factor depending on $x$, as
$$\Pr(Y_1, \ldots, Y_t) \propto b_\infty^\top B(G(x_t)) \cdots B(G(x_1))\, b_1.$$
Continuous Emission HMM
Continuous Emission HMM
Define $g(x) \equiv E[Y_2 \mid x_1]$. Let $h_t$ be the probability vector associated with being in a particular state at time $t$. Then
$$E[y_2 \mid h_2] = U^\top M\, h_2.$$
Also,
$$E[h_2 \mid h_1] = T h_1,$$
thus
$$E[y_2 \mid h_1] = U^\top M\, T\, h_1.$$
Continuous HMM Emission
To establish a belief about $h_1$ given $x_1$, recall Bayes' formula:
$$\Pr(h_1 \mid x_1) = \frac{\Pr(x_1 \mid h_1)\,\Pr(h_1)}{\Pr(x_1)}.$$
We can arrange each probability into a vector, and because in the indicator-vector case the probability vector is the same as the expected-value vector, we have, in vector notation,
$$E[h_1 \mid x_1] = \frac{\mathrm{diag}(\pi)\,\lambda(x)}{\pi^\top \lambda(x)},$$
and so, putting the pieces together, we get
$$E[y_2 \mid x_1] = U^\top M\, T\, \frac{\mathrm{diag}(\pi)\,\lambda(x)}{\pi^\top \lambda(x)}.$$
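The vectorized posterior is Bayes' rule applied componentwise, which is easy to confirm numerically (the prior and likelihood values below are arbitrary):

```python
import numpy as np

pi = np.array([0.3, 0.7])      # prior Pr(h_1)
lam = np.array([0.2, 0.5])     # lambda(x) = Pr(x | h) for one fixed x

# Vector form: E[h_1 | x_1] = diag(pi) lambda(x) / (pi^T lambda(x)).
posterior = np.diag(pi) @ lam / (pi @ lam)

# Componentwise Bayes' rule for comparison.
bayes = np.array([lam[i] * pi[i] for i in range(2)]) / (lam @ pi)
```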
Continuous HMM Emission
Recall that the goal is to isolate $\lambda(x)$. Note that
$$G(x) \equiv E[y_2 \otimes y_1]^{-1}\, g(x) = (M^\top U)^{-1}\,\frac{\lambda(x)}{\pi^\top \lambda(x)}.$$
When this is plugged into our fully reduced version of $B(\gamma)$, we get
$$B(G(x)) = (U^\top M)\,T\,\mathrm{diag}\big(M^\top U\, G(x)\big)\,(U^\top M)^{-1} = (U^\top M)\,T\,\mathrm{diag}(\lambda(x))\,(U^\top M)^{-1}\,\frac{1}{\Pr(x)},$$
where $\Pr(x)$ is the marginal probability. $B(G(x))$ is exactly what we want, up to a constant factor depending on $x$.
Spectral Estimation Algorithm
Algorithm 1: Computing observables for spectral estimation of an HMM, fully reduced third moment
1. Input: training examples $x^{(i)}$ for $i \in \{1, \ldots, M\}$, where $x^{(i)} = (x_1^{(i)}, x_2^{(i)}, x_3^{(i)})$.
2. Compute $\hat{E}[x_2 \otimes x_1] = \frac{1}{M}\sum_{i=1}^{M} x_2^{(i)} \otimes x_1^{(i)}$.
3. Compute the $k$ left singular vectors corresponding to the top $k$ singular values of $\hat{E}[x_2 \otimes x_1]$. Call the matrix of these vectors $\hat{U}$.
4. Reduce the data: $\hat{y} = \hat{U}^\top x$.
5. Compute $\hat{\mu} = \frac{1}{M}\sum_{i=1}^{M} y_1^{(i)}$, $\hat{\Sigma} = \frac{1}{M}\sum_{i=1}^{M} y_2^{(i)} \otimes y_1^{(i)}$, and the tensor $\hat{C} = \frac{1}{M}\sum_{i=1}^{M} y_3^{(i)} \otimes y_1^{(i)} \otimes y_2^{(i)}$.
6. Set $\hat{b}_1 = \hat{\mu}$ and $\hat{b}_\infty^\top = \hat{b}_1^\top \hat{\Sigma}^{-1}$.
7. Right-multiply each slice of the tensor in the $y_2$ direction (so $y_2$ is being sliced up, leaving the $y_3 \otimes y_1$ matrices intact) by $\hat{\Sigma}^{-1}$ to form $\hat{B}(\gamma) = \hat{C}(\gamma)\,\hat{\Sigma}^{-1}$.
Similarity Transform from A(x) to B(x)
Unfortunately, $A(x)$ isn't directly learnable. However, an appropriate similarity transformation of $A(x)$ (of which there is more than one) is learnable by the method of moments, bypassing the need to recover the HMM parameters, and it still gets us what we want. Note that, for any invertible $S$,
$$\Pr(x_1, \ldots, x_t) = 1^\top A(x_t) \cdots A(x_1)\,\pi = \underbrace{1^\top S^{-1}}_{b_\infty^\top}\,\underbrace{S A(x_t) S^{-1}}_{B(x_t)} \cdots \underbrace{S A(x_1) S^{-1}}_{B(x_1)}\,\underbrace{S\pi}_{b_1} = b_\infty^\top B(x_t) \cdots B(x_1)\, b_1.$$
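Any invertible $S$ gives a valid transform, since the interior $S^{-1}S$ factors telescope. A quick numerical check on a toy HMM (all parameters invented):

```python
import numpy as np

rng = np.random.default_rng(2)
T = np.array([[0.8, 0.3],
              [0.2, 0.7]])
O = np.array([[0.6, 0.2],
              [0.4, 0.8]])          # O[x, h] = Pr(x | h)
pi = np.array([0.5, 0.5])
S = rng.normal(size=(2, 2))         # a random matrix is almost surely invertible

def A(x):
    return T @ np.diag(O[x])

S_inv = np.linalg.inv(S)
b_inf = S_inv.T @ np.ones(2)        # b_inf^T = 1^T S^{-1}
b_1 = S @ pi

def B(x):
    return S @ A(x) @ S_inv         # B(x) = S A(x) S^{-1}

xs = [0, 1, 1, 0]
vA, vB = pi, b_1
for x in xs:
    vA, vB = A(x) @ vA, B(x) @ vB

p_direct = np.ones(2) @ vA          # 1^T A(x_t) ... A(x_1) pi
p_transformed = b_inf @ vB          # b_inf^T B(x_t) ... B(x_1) b_1
```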
Markowitz Optimization
Given a vector of current prices $p_t$ and unknown future prices $p_{t+\tau}$, the market value is
$$\Psi_\alpha = \alpha_{pos}^\top (p_{t+\tau} - p_t). \tag{1}$$
Assuming that the market is Gaussian, the price distribution is
$$p_{t+\tau} \sim \mathcal{N}(\mu, \Sigma). \tag{2}$$
Therefore the distribution of the portfolio is
$$\Psi_\alpha \sim \mathcal{N}\big(\alpha_{pos}^\top(\mu - p_t),\; \alpha_{pos}^\top \Sigma\, \alpha_{pos}\big). \tag{3}$$
The allocation is optimized under exponential utility, with risk-aversion parameter $\zeta$, via the certainty equivalent and the quadratic program
$$\operatorname*{argmax}_{\alpha_{pos}}\; \mathrm{CE}(\alpha_{pos}) = P^\top \alpha_{pos} - \frac{1}{2\zeta}\,\alpha_{pos}^\top \Sigma\, \alpha_{pos}, \tag{4}$$
where $P$ is the expected profit, roughly defined as $P_t = E[p_{t+\tau} - p_t]$. Numeric optimizers seek to minimize, so define the objective function as $f(\alpha) \equiv -\mathrm{CE}(\alpha)$.
More informationHuman-Oriented Robotics. Temporal Reasoning. Kai Arras Social Robotics Lab, University of Freiburg
Temporal Reasoning Kai Arras, University of Freiburg 1 Temporal Reasoning Contents Introduction Temporal Reasoning Hidden Markov Models Linear Dynamical Systems (LDS) Kalman Filter 2 Temporal Reasoning
More informationAn Introduction to Spectral Learning
An Introduction to Spectral Learning Hanxiao Liu November 8, 2013 Outline 1 Method of Moments 2 Learning topic models using spectral properties 3 Anchor words Preliminaries X 1,, X n p (x; θ), θ = (θ 1,
More informationDecember 20, MAA704, Multivariate analysis. Christopher Engström. Multivariate. analysis. Principal component analysis
.. December 20, 2013 Todays lecture. (PCA) (PLS-R) (LDA) . (PCA) is a method often used to reduce the dimension of a large dataset to one of a more manageble size. The new dataset can then be used to make
More informationPrincipal Component Analysis (PCA) Our starting point consists of T observations from N variables, which will be arranged in an T N matrix R,
Principal Component Analysis (PCA) PCA is a widely used statistical tool for dimension reduction. The objective of PCA is to find common factors, the so called principal components, in form of linear combinations
More information13: Variational inference II
10-708: Probabilistic Graphical Models, Spring 2015 13: Variational inference II Lecturer: Eric P. Xing Scribes: Ronghuo Zheng, Zhiting Hu, Yuntian Deng 1 Introduction We started to talk about variational
More informationStatistical Techniques in Robotics (16-831, F12) Lecture#17 (Wednesday October 31) Kalman Filters. Lecturer: Drew Bagnell Scribe:Greydon Foil 1
Statistical Techniques in Robotics (16-831, F12) Lecture#17 (Wednesday October 31) Kalman Filters Lecturer: Drew Bagnell Scribe:Greydon Foil 1 1 Gauss Markov Model Consider X 1, X 2,...X t, X t+1 to be
More informationPrincipal Component Analysis
Machine Learning Michaelmas 2017 James Worrell Principal Component Analysis 1 Introduction 1.1 Goals of PCA Principal components analysis (PCA) is a dimensionality reduction technique that can be used
More informationHidden Markov Models
10-601 Introduction to Machine Learning Machine Learning Department School of Computer Science Carnegie Mellon University Hidden Markov Models Matt Gormley Lecture 22 April 2, 2018 1 Reminders Homework
More informationUNIVERSITY of PENNSYLVANIA CIS 520: Machine Learning Final, Fall 2013
UNIVERSITY of PENNSYLVANIA CIS 520: Machine Learning Final, Fall 2013 Exam policy: This exam allows two one-page, two-sided cheat sheets; No other materials. Time: 2 hours. Be sure to write your name and
More informationMFM Practitioner Module: Risk & Asset Allocation. John Dodson. February 18, 2015
MFM Practitioner Module: Risk & Asset Allocation February 18, 2015 No introduction to portfolio optimization would be complete without acknowledging the significant contribution of the Markowitz mean-variance
More informationSTA 4273H: Statistical Machine Learning
STA 4273H: Statistical Machine Learning Russ Salakhutdinov Department of Statistics! rsalakhu@utstat.toronto.edu! http://www.utstat.utoronto.ca/~rsalakhu/ Sidney Smith Hall, Room 6002 Lecture 7 Approximate
More informationCS 195-5: Machine Learning Problem Set 1
CS 95-5: Machine Learning Problem Set Douglas Lanman dlanman@brown.edu 7 September Regression Problem Show that the prediction errors y f(x; ŵ) are necessarily uncorrelated with any linear function of
More informationStructure in Data. A major objective in data analysis is to identify interesting features or structure in the data.
Structure in Data A major objective in data analysis is to identify interesting features or structure in the data. The graphical methods are very useful in discovering structure. There are basically two
More informationGaussian Mixture Models
Gaussian Mixture Models David Rosenberg, Brett Bernstein New York University April 26, 2017 David Rosenberg, Brett Bernstein (New York University) DS-GA 1003 April 26, 2017 1 / 42 Intro Question Intro
More informationCorrelation Matrices and the Perron-Frobenius Theorem
Wilfrid Laurier University July 14 2014 Acknowledgements Thanks to David Melkuev, Johnew Zhang, Bill Zhang, Ronnnie Feng Jeyan Thangaraj for research assistance. Outline The - theorem extensions to negative
More informationInference and estimation in probabilistic time series models
1 Inference and estimation in probabilistic time series models David Barber, A Taylan Cemgil and Silvia Chiappa 11 Time series The term time series refers to data that can be represented as a sequence
More informationTheory and Applications of High Dimensional Covariance Matrix Estimation
1 / 44 Theory and Applications of High Dimensional Covariance Matrix Estimation Yuan Liao Princeton University Joint work with Jianqing Fan and Martina Mincheva December 14, 2011 2 / 44 Outline 1 Applications
More informationDiscrete Mathematics and Probability Theory Fall 2015 Lecture 21
CS 70 Discrete Mathematics and Probability Theory Fall 205 Lecture 2 Inference In this note we revisit the problem of inference: Given some data or observations from the world, what can we infer about
More informationEmpirical properties of large covariance matrices in finance
Empirical properties of large covariance matrices in finance Ex: RiskMetrics Group, Geneva Since 2010: Swissquote, Gland December 2009 Covariance and large random matrices Many problems in finance require
More informationThe Expectation-Maximization Algorithm
1/29 EM & Latent Variable Models Gaussian Mixture Models EM Theory The Expectation-Maximization Algorithm Mihaela van der Schaar Department of Engineering Science University of Oxford MLE for Latent Variable
More informationLinear Dynamical Systems
Linear Dynamical Systems Sargur N. srihari@cedar.buffalo.edu Machine Learning Course: http://www.cedar.buffalo.edu/~srihari/cse574/index.html Two Models Described by Same Graph Latent variables Observations
More informationIntroduction to Systems Analysis and Decision Making Prepared by: Jakub Tomczak
Introduction to Systems Analysis and Decision Making Prepared by: Jakub Tomczak 1 Introduction. Random variables During the course we are interested in reasoning about considered phenomenon. In other words,
More informationSupplemental for Spectral Algorithm For Latent Tree Graphical Models
Supplemental for Spectral Algorithm For Latent Tree Graphical Models Ankur P. Parikh, Le Song, Eric P. Xing The supplemental contains 3 main things. 1. The first is network plots of the latent variable
More informationThe Multivariate Gaussian Distribution [DRAFT]
The Multivariate Gaussian Distribution DRAFT David S. Rosenberg Abstract This is a collection of a few key and standard results about multivariate Gaussian distributions. I have not included many proofs,
More informationDimensionality Reduction and Principle Components Analysis
Dimensionality Reduction and Principle Components Analysis 1 Outline What is dimensionality reduction? Principle Components Analysis (PCA) Example (Bishop, ch 12) PCA vs linear regression PCA as a mixture
More informationMachine learning for pervasive systems Classification in high-dimensional spaces
Machine learning for pervasive systems Classification in high-dimensional spaces Department of Communications and Networking Aalto University, School of Electrical Engineering stephan.sigg@aalto.fi Version
More informationOptimal Investment Strategies: A Constrained Optimization Approach
Optimal Investment Strategies: A Constrained Optimization Approach Janet L Waldrop Mississippi State University jlc3@ramsstateedu Faculty Advisor: Michael Pearson Pearson@mathmsstateedu Contents Introduction
More informationAdvanced Data Science
Advanced Data Science Dr. Kira Radinsky Slides Adapted from Tom M. Mitchell Agenda Topics Covered: Time series data Markov Models Hidden Markov Models Dynamic Bayes Nets Additional Reading: Bishop: Chapter
More informationAppendix A. Proof to Theorem 1
Appendix A Proof to Theorem In this section, we prove the sample complexity bound given in Theorem The proof consists of three main parts In Appendix A, we prove perturbation lemmas that bound the estimation
More informationECE521 Lecture 19 HMM cont. Inference in HMM
ECE521 Lecture 19 HMM cont. Inference in HMM Outline Hidden Markov models Model definitions and notations Inference in HMMs Learning in HMMs 2 Formally, a hidden Markov model defines a generative process
More informationClassification Methods II: Linear and Quadratic Discrimminant Analysis
Classification Methods II: Linear and Quadratic Discrimminant Analysis Rebecca C. Steorts, Duke University STA 325, Chapter 4 ISL Agenda Linear Discrimminant Analysis (LDA) Classification Recall that linear
More informationDimensionality Reduction: PCA. Nicholas Ruozzi University of Texas at Dallas
Dimensionality Reduction: PCA Nicholas Ruozzi University of Texas at Dallas Eigenvalues λ is an eigenvalue of a matrix A R n n if the linear system Ax = λx has at least one non-zero solution If Ax = λx
More informationDS-GA 1002 Lecture notes 12 Fall Linear regression
DS-GA Lecture notes 1 Fall 16 1 Linear models Linear regression In statistics, regression consists of learning a function relating a certain quantity of interest y, the response or dependent variable,
More informationGraphical Models for Collaborative Filtering
Graphical Models for Collaborative Filtering Le Song Machine Learning II: Advanced Topics CSE 8803ML, Spring 2012 Sequence modeling HMM, Kalman Filter, etc.: Similarity: the same graphical model topology,
More informationHeeyoul (Henry) Choi. Dept. of Computer Science Texas A&M University
Heeyoul (Henry) Choi Dept. of Computer Science Texas A&M University hchoi@cs.tamu.edu Introduction Speaker Adaptation Eigenvoice Comparison with others MAP, MLLR, EMAP, RMP, CAT, RSW Experiments Future
More informationSTA 4273H: Statistical Machine Learning
STA 4273H: Statistical Machine Learning Russ Salakhutdinov Department of Statistics! rsalakhu@utstat.toronto.edu! http://www.utstat.utoronto.ca/~rsalakhu/ Sidney Smith Hall, Room 6002 Lecture 3 Linear
More informationEconomics 2010c: Lectures 9-10 Bellman Equation in Continuous Time
Economics 2010c: Lectures 9-10 Bellman Equation in Continuous Time David Laibson 9/30/2014 Outline Lectures 9-10: 9.1 Continuous-time Bellman Equation 9.2 Application: Merton s Problem 9.3 Application:
More informationHidden Markov models
Hidden Markov models Charles Elkan November 26, 2012 Important: These lecture notes are based on notes written by Lawrence Saul. Also, these typeset notes lack illustrations. See the classroom lectures
More informationProbabilistic Graphical Models
Probabilistic Graphical Models Brown University CSCI 2950-P, Spring 2013 Prof. Erik Sudderth Lecture 13: Learning in Gaussian Graphical Models, Non-Gaussian Inference, Monte Carlo Methods Some figures
More informationCS8803: Statistical Techniques in Robotics Byron Boots. Hilbert Space Embeddings
CS8803: Statistical Techniques in Robotics Byron Boots Hilbert Space Embeddings 1 Motivation CS8803: STR Hilbert Space Embeddings 2 Overview Multinomial Distributions Marginal, Joint, Conditional Sum,
More informationIntroduction to Machine Learning. PCA and Spectral Clustering. Introduction to Machine Learning, Slides: Eran Halperin
1 Introduction to Machine Learning PCA and Spectral Clustering Introduction to Machine Learning, 2013-14 Slides: Eran Halperin Singular Value Decomposition (SVD) The singular value decomposition (SVD)
More informationProbabilistic Graphical Models
Probabilistic Graphical Models Lecture 12 Dynamical Models CS/CNS/EE 155 Andreas Krause Homework 3 out tonight Start early!! Announcements Project milestones due today Please email to TAs 2 Parameter learning
More informationMathematical Formulation of Our Example
Mathematical Formulation of Our Example We define two binary random variables: open and, where is light on or light off. Our question is: What is? Computer Vision 1 Combining Evidence Suppose our robot
More informationApproximate Inference
Approximate Inference Simulation has a name: sampling Sampling is a hot topic in machine learning, and it s really simple Basic idea: Draw N samples from a sampling distribution S Compute an approximate
More informationDimension Reduction (PCA, ICA, CCA, FLD,
Dimension Reduction (PCA, ICA, CCA, FLD, Topic Models) Yi Zhang 10-701, Machine Learning, Spring 2011 April 6 th, 2011 Parts of the PCA slides are from previous 10-701 lectures 1 Outline Dimension reduction
More informationLecture 7: Con3nuous Latent Variable Models
CSC2515 Fall 2015 Introduc3on to Machine Learning Lecture 7: Con3nuous Latent Variable Models All lecture slides will be available as.pdf on the course website: http://www.cs.toronto.edu/~urtasun/courses/csc2515/
More informationAn Introduction to Independent Components Analysis (ICA)
An Introduction to Independent Components Analysis (ICA) Anish R. Shah, CFA Northfield Information Services Anish@northinfo.com Newport Jun 6, 2008 1 Overview of Talk Review principal components Introduce
More informationA Bayesian Perspective on Residential Demand Response Using Smart Meter Data
A Bayesian Perspective on Residential Demand Response Using Smart Meter Data Datong-Paul Zhou, Maximilian Balandat, and Claire Tomlin University of California, Berkeley [datong.zhou, balandat, tomlin]@eecs.berkeley.edu
More informationHidden Markov Models,99,100! Markov, here I come!
Hidden Markov Models,99,100! Markov, here I come! 16.410/413 Principles of Autonomy and Decision-Making Pedro Santana (psantana@mit.edu) October 7 th, 2015. Based on material by Brian Williams and Emilio
More informationComputation. For QDA we need to calculate: Lets first consider the case that
Computation For QDA we need to calculate: δ (x) = 1 2 log( Σ ) 1 2 (x µ ) Σ 1 (x µ ) + log(π ) Lets first consider the case that Σ = I,. This is the case where each distribution is spherical, around the
More informationFast Linear Algorithms for Machine Learning
University of Pennsylvania ScholarlyCommons Publicly Accessible Penn Dissertations 1-1-2015 Fast Linear Algorithms for Machine Learning Yichao Lu University of Pennsylvania, luyichao1123@gmail.com Follow
More informationFactor Models for Asset Returns. Prof. Daniel P. Palomar
Factor Models for Asset Returns Prof. Daniel P. Palomar The Hong Kong University of Science and Technology (HKUST) MAFS6010R- Portfolio Optimization with R MSc in Financial Mathematics Fall 2018-19, HKUST,
More informationLECTURE NOTE #3 PROF. ALAN YUILLE
LECTURE NOTE #3 PROF. ALAN YUILLE 1. Three Topics (1) Precision and Recall Curves. Receiver Operating Characteristic Curves (ROC). What to do if we do not fix the loss function? (2) The Curse of Dimensionality.
More informationCS 7180: Behavioral Modeling and Decision- making in AI
CS 7180: Behavioral Modeling and Decision- making in AI Learning Probabilistic Graphical Models Prof. Amy Sliva October 31, 2012 Hidden Markov model Stochastic system represented by three matrices N =
More information