An Optimization Approach In Information Security Risk Management
Advances in Management & Applied Economics, vol. 2, no. 3, 2012, 1-12
ISSN: (print version), (online)
Scienpress Ltd, 2012

An Optimization Approach In Information Security Risk Management

Marn-ling Shing 1, Chen-chi Shing 2, Lee-pin Shing 3 and Lee-hur Shing 4

Abstract

In order to protect enterprise networks from unauthorized access and malicious attacks, several risk models have been proposed. This paper proposes an optimization approach that helps a business make security decisions under a given budget constraint over a number of years.

JEL classification numbers: G32
Keywords: risk assessment; risk management; information security model; risk model

1 Early Child Education Department and Institute of Child Development, Taipei Municipal University of Education, 1 Ai-Kuo West Road, Taipei, Taiwan, R.O.C., shing@tmue.edu.tw
2 Information Technology Department, Radford University, Box 6933, Radford, VA 24142, cshing@radford.edu
3 Virginia Tech, Blacksburg, VA, shingle@vt.edu
4 Virginia Tech, Blacksburg, VA, leehurshing@yahoo.com

Article Info: Received: February 5, Revised: April 3, 2012. Published online: August 31, 2012
1 Introduction

Today most enterprise networks are web-based systems or are connected to the Internet. This makes them more vulnerable to unauthorized access, malicious attacks, or denial of service. One of the most shocking cases is the TJX security breach in 2006 [9]. TJX, the retail giant, eventually had to settle with most banks and banking associations for $40.9 million. For business owners and enterprise managers, these kinds of events cannot be completely prevented. Looking back at the records of the last twenty years, we find that the enterprise systems of high-profile companies such as Yahoo and eBay have been compromised, and the damage to a company's operations and finances cannot be ignored. Businesses today treat security attacks or breaches as accidents, similar to natural disasters.

Traditional defense mechanisms include firewalls, intrusion detection systems (IDS), and anti-virus programs. A firewall can be a hardware device or a software program. Most firewalls use packet-filtering techniques to inspect incoming packets and decide whether to accept or reject them. Network systems also use authentication and cryptographic techniques to control access. Antivirus programs check for potentially malicious viruses or worms, but they are based on known virus attacks. Cryptographic techniques include encryption, digital signatures, and digital certificates. An IDS uses different algorithms to filter possible suspicious attacks, but it may sometimes raise false alarms. Similar to the algorithms of an IDS, this paper strives to develop an approach to predict the behavior of a system. If suspicious behaviors are discovered, the system will send a warning signal to the network administrators, who will then decide on the appropriate actions to avoid possible false alarms.
In December 2006, the TJX company detected suspicious software on its computer system, which was later confirmed to be an intrusion with data loss. TJX did not discover the suspicious behavior until several months later [9]. Taking risks is an everyday action in business. Before making a decision, an executive must perform a risk analysis to reach the optimal decision. In the information security area, risk assessment (or risk analysis) is defined as the "analysis of the potential threats against an asset and the likelihood that they will materialize" [4]. Probabilistic risk assessment (PRA) (or probabilistic safety assessment/analysis) is a systematic and comprehensive methodology to evaluate risks associated with a complex engineered technological entity. Risk in a PRA is defined as a feasible detrimental outcome of an activity or action. In a PRA, risk is characterized by two quantities: 1. the magnitude (severity) of the possible adverse consequence(s), and 2. the likelihood (probability) of occurrence of each consequence.
Consequences are expressed numerically (e.g., the number of people potentially hurt or killed) and their likelihoods of occurrence are expressed as probabilities or frequencies (i.e., the number of occurrences or the probability of occurrence per unit time). The total risk is the expected loss: the sum of the products of the consequences multiplied by their probabilities [10]. Risk analysis starts with understanding which assets are potentially at risk, identifying the possible threats, and finding out which vulnerabilities could be exploited. The risk assessment then provides a way to protect the assets with limited resources. Even after a risk assessment has been done and action has been taken to protect the assets, the assessment has to be updated when the environment changes, for example through hardware, software or personnel changes. Therefore, risk must be monitored continuously. However, no good tool exists to monitor all such possible changes. A semi-Markov chain allows an arbitrary distribution on state changes and is a powerful way to monitor state changes dynamically over time. This paper emphasizes how the likelihood of risk propagates through time and how to monitor the risk based on semi-Markov chain models. In the next few sections we first introduce Markov chain models and their properties. A simulation is then used to study the risk propagation through time. Finally, statistical hypothesis testing is used to validate the model.

2 Semi-Markov Chain Model

We can use a probability model (such as a Markov chain) to predict the possible security condition of an enterprise system. A Markov chain is a stochastic process in which the probability of a system state depends only on the previous state, not on the history of how the previous state was reached.
If the states and their transitions at discrete points in time are discrete, it is called a Markov chain [3]. In other words, a stochastic process M(t) is a Markov chain if, for any n time points t_1 < t_2 < ... < t_n with corresponding states m_1, m_2, ..., m_n, the probability that the system is in state m_n at time t_n given the whole history equals the probability that the system is in state m_n at time t_n given only that the system was in state m_{n-1} at time t_{n-1}. That is,

    P(M(t_n) = m_n | M(t_{n-1}) = m_{n-1}, ..., M(t_1) = m_1) = P(M(t_n) = m_n | M(t_{n-1}) = m_{n-1}),

where m_i is the state of the process at time t_i (t_1 <= t_i <= t_n), i = 1, ..., n. Thus, the probability that the Markov chain is in state m_n at t_n depends only on the previous state m_{n-1} at t_{n-1} [1]. Suppose p(0) represents the column vector of the probabilities that the system is in each of the n states at time 0,
    p(0) = [p_1(0), p_2(0), ..., p_n(0)]',  with  sum_{i=1}^{n} p_i(0) = 1,

where p_i(0) represents the probability that the system is in state i at time 0. Then the probability that the system is in one of those n states at time 1 is represented by p(1),

    p(1) = [p_1(1), p_2(1), ..., p_n(1)]',  with  sum_{i=1}^{n} p_i(1) = 1,

where p_i(1) represents the probability that the system is in state i at time 1, and p(1) = T p(0), where T is the transpose of the transition probability matrix,

    T = [ p_11  p_21 ... p_n1
          p_12  p_22 ... p_n2
          ...
          p_1n  p_2n ... p_nn ],   sum_{j=1}^{n} p_ij = 1, for i = 1, 2, ..., n   (1)

and p_ij is the probability that the system is in state j, given that it was in state i. Suppose the probability that the system is in one of those n states at time s is represented by

    p(s) = [p_1(s), p_2(s), ..., p_n(s)]',

where p_i(s) represents the probability that the system is in state i at time s. Then p(s) = T (T (... (T p(0)))) = T^s p(0); equivalently, in terms of row vectors, p'(s) = p'(0) (T')^s.

For example, suppose that there are two possible threats to a company's network system: an intruder through the wire network and an intruder through the wireless network. If we represent each threat as a state, then there are two states in the state vector. Suppose that in the first year 80% of intruders come through the wire network and the rest through the wireless network. The security of the company's network system in year 1, the initial state, can then be represented by the transpose vector p'(1) of p(1):

    p'(1) = [0.8  0.2].

Suppose that if the company was intruded through the wire network, there is a 60% chance of a wire attack in the next year even though
some patches were done. In the meantime, there is a 40% chance that a wireless attack happens in the next year. On the other hand, if the company was intruded through the wireless network, there is a 90% chance of a wire attack in the next year even though some patches were done, and a 10% chance that a wireless attack happens again. That is, the transition probability matrix T is

    T = [ 0.6  0.4
          0.9  0.1 ]   (2)

The security in year 2 can be predicted by:

    p'(2) = p'(1) T = [0.8  0.2] T = [0.66  0.34].

Thus, there is a 66% chance of an intrusion through the wire network in year 2 and a 34% chance of an intrusion through the wireless network. The state vector in year 3 can be predicted in the same way: p'(3) = p'(2) T. The general rules for period n are [1]:

    p'(n) = p'(n-1) T,
    p'(n) = p'(1) T^(n-1).

In general, p_ij in each row of the matrix T' could come from any probability distribution. In fact, a semi-Markov chain can be defined as a stochastic process that can have an arbitrary distribution between state changes [6]. A variety of different distributions will be used in the simulation experiment.

Definition 2.1 A semi-Markov process is a stochastic process that has an arbitrary distribution between state changes given it is in the current state.

In the next section, we will investigate the properties of semi-Markov chains.

3 Properties

In this section we will investigate when a stochastic system reaches a steady state (or equilibrium state) in the long run. We first introduce what it means for a state to be recurrent nonnull and aperiodic.
Definition 3.1 A state is recurrent if the chain returns to it with probability one after state transitions. If a state is not recurrent, it is transient. A recurrent state is called recurrent nonnull if the mean time to return to it is finite, and recurrent null if the mean return time is infinite. A recurrent state is aperiodic if for some number k there is a way to return to it in k, k+1, k+2, ... transitions. A recurrent state is called periodic if it is not aperiodic.

Definition 3.2 A semi-Markov chain is irreducible if all states are reachable from all other states. It is recurrent nonnull if all its states are recurrent nonnull. It is aperiodic if all its states are aperiodic.

Definition 3.3 A semi-Markov chain is called ergodic if it is irreducible, recurrent nonnull and aperiodic.

Property 3.1 If a semi-Markov chain is ergodic, then there exists a unique steady-state (or equilibrium) probability state. Depending on the structure of the transition probability matrix, a steady state may not exist; for example, a symmetric random walk, which has p = 0.5, is periodic [5].

Property 3.2 For any semi-Markov chain, if all the entries of its transition probability matrix are non-zero, then it is recurrent nonnull and aperiodic.

Proof: Since all the entries of the transition probability matrix are non-zero (and therefore positive), every entry of the nth power of the matrix is also non-zero, so any state can reach any other state in any number of steps. Therefore, a state returns to itself after time n with probability one, and it is recurrent. The mean return time is finite, so it is recurrent nonnull. If the state can be returned to in k steps, it can also be returned to in one more step, so it is aperiodic.
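Property 3.2 can be illustrated numerically with the two-threat example of Section 2. The following is a minimal sketch (not code from the paper): every entry of T in (2) is positive, every power of T keeps all entries positive, and repeated transitions drive every row of T^n to a common limiting vector.

```python
# Illustrative sketch (not from the paper): the two-state matrix T of (2).
# Rows: current state (wire, wireless); columns: next state.
T = [[0.6, 0.4],
     [0.9, 0.1]]

def mat_mul(A, B):
    """Multiply two square matrices given as lists of rows."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

P = T
for _ in range(50):            # compute T^51 by repeated multiplication
    P = mat_mul(P, T)

# Every entry of T^n stays strictly positive (recurrent nonnull, aperiodic)...
assert all(entry > 0 for row in P for entry in row)
# ...and every row converges to the steady state (9/13, 4/13):
print([round(x, 6) for x in P[0]])   # -> [0.692308, 0.307692]
```

The second eigenvalue of this T is -0.3, so the powers converge geometrically; after 51 steps both rows agree with the limit to machine precision.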
Corollary 3.1 For any semi-Markov chain, if all the entries of its transition probability matrix are non-zero, then it has an equilibrium state.

Proof: Since every entry of the transition probability matrix is non-zero, every entry of any positive integer power of the matrix is also non-zero, so all states are reachable from all other states after n steps and the semi-Markov chain is irreducible. By Property 3.2 the semi-Markov chain is also recurrent nonnull and aperiodic. Therefore, it is ergodic, and by Property 3.1 there exists a unique steady state.

Corollary 3.2 The equilibrium state described in Corollary 3.1 is the eigenvector, with eigenvalue 1, of T, the transpose of the transition probability matrix.

Proof: The steady state p(n) satisfies
    p(n) = T p(n), or (T - I) p(n) = 0,

where I is the identity matrix and 0 is a zero column vector. That is, p(n) is the eigenvector of eigenvalue 1.

Example 3.1 For the matrix T of (2),

    T' = [ 0.6  0.9          T' - I = [ -0.4   0.9
           0.4  0.1 ],                   0.4  -0.9 ].

The steady state [r  s]', where r + s = 1, satisfies (T' - I) [r  s]' = [0  0]'. That is, -0.4 r + 0.9 s = 0 together with r + s = 1 implies r = 9/13 and s = 4/13. Hence, the steady state is [9/13  4/13]'.

In the next section, we will develop a risk model that meets certain criteria.

4 Optimization Model

Suppose that within n years there are s possible threats and some chances of damage for a company. The company wants to choose the best policy that meets its budget. We may model those threats as the states of a state vector in a semi-Markov chain model. That is, the state row vector at time i,

    q_i = [q_1i, q_2i, ..., q_si],

represents the concerned threats with their respective probabilities. Assume the security installation has total investment value V_ji for threat j at time i, and let D_ji be the damage cost created by threat j at time i. Then the cost at time i, C_i, can be represented by

    C_i = sum_{j=1}^{s} (D_ji q_ji + V_ji)   (3)

Suppose we observe those costs over a total time of n years. The total cost T_n must stay within a given budget B; i.e., maximize n such that T_n < B, where

    T_n = sum_{i=1}^{n} C_i   (4)

Note that the steady state vector q satisfies not only (1) but also, by Corollary 3.2,

    (T - I) q = 0   (5)
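For the two-state case, the steady state of Example 3.1 (equivalently, a solution of (5) together with the normalization r + s = 1) can be computed exactly. This is a minimal sketch in exact rational arithmetic, not the paper's own Gaussian-elimination code:

```python
# Sketch (not from the paper): solving (T' - I) [r s]' = 0 with r + s = 1
# exactly for the matrix T of (2), using rational arithmetic.
from fractions import Fraction as F

T = [[F(6, 10), F(4, 10)],    # wire     -> wire 0.6, wireless 0.4
     [F(9, 10), F(1, 10)]]    # wireless -> wire 0.9, wireless 0.1

# The first row of (T' - I) [r s]' = 0 gives  -0.4 r + 0.9 s = 0, i.e.
# r = (0.9/0.4) s; substituting into r + s = 1 isolates s directly.
s = T[0][1] / (T[0][1] + T[1][0])     # s = 0.4 / (0.4 + 0.9) = 4/13
r = 1 - s                             # r = 9/13
print(r, s)                           # -> 9/13 4/13
```

The closed form s = p_12 / (p_12 + p_21) holds for any two-state chain with both off-diagonal entries non-zero, which is why the steady states quoted later (5/6 and 7/9) can be stated without showing their matrices.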
If there is no risk knowledge of which transition probability matrix appears in each year, we may use the expected yearly cost to estimate the costs over m years and still meet the budget. That is, maximize n such that n * E(T_n) < B. For example, we will consider security policies over three years. Suppose that we choose policy A of the example in the above section, and that the total security budget for the company is B = $26K. Assume that in year 1 the damage for an intruder through the wire network is $13K and $1K through the wireless network; that is, D_11 = 13000 and D_21 = 1000, with q_11 = 9/13 and q_21 = 4/13. Furthermore, suppose that the investment for fixing the damage caused by the wire intruder is $2K and by the wireless intruder is $4K; that is, V_11 = 2000 and V_21 = 4000. Therefore, C_1 = (9 + 2)K + (4/13 + 4)K ≈ $15,307.69.

Suppose that in year 2 we choose a different policy B whose transition probability matrix yields the steady state [q_12 = 5/6, q_22 = 1/6]. The damage for an intruder through the wire network is $8K and $4K through the wireless network; that is, D_12 = 8000 and D_22 = 4000. Furthermore, suppose that the investment for fixing the damage caused by the wire intruder is $2K and by the wireless intruder is $3K; that is, V_12 = 2000 and V_22 = 3000. Therefore, C_2 = 8000(5/6) + 2000 + 4000(1/6) + 3000 ≈ $12,333.33.

Suppose that in year 3 we choose a different policy C whose transition probability matrix yields the steady state [q_13 = 7/9, q_23 = 2/9]. The damage for an intruder through the wire network is $9K and $2K through the wireless network; that is, D_13 = 9000 and D_23 = 2000. Furthermore, suppose that the investment for fixing the damage caused by the wire intruder is $1K and by the wireless intruder is $2K; that is, V_13 = 1000 and V_23 = 2000. Therefore, C_3 = 9000(7/9) + 1000 + 2000(2/9) + 2000 ≈ $10,444.44.

Therefore, if we know all the risk information for those three years, we should adopt policy A only in the first year to meet the budget.
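The three-year comparison above can be reproduced with a short script. This is a sketch built from the values stated in the text; the intermediate dollar figures are recomputed from Eq. (3), not quoted from the paper:

```python
# Sketch (values from the Section 4 example): yearly cost C_i of Eq. (3),
# C_i = sum_j (D_ji * q_ji + V_ji), checked against the budget B = $26K.
B = 26000
years = [  # each year/policy: list of (q_ji, D_ji, V_ji) per threat
    [(9/13, 13000, 2000), (4/13, 1000, 4000)],   # year 1, policy A
    [(5/6,   8000, 2000), (1/6,  4000, 3000)],   # year 2, policy B
    [(7/9,   9000, 1000), (2/9,  2000, 2000)],   # year 3, policy C
]
C = [sum(D * q + V for q, D, V in year) for year in years]
print([round(c, 2) for c in C])   # -> [15307.69, 12333.33, 10444.44]
print(C[0] < B)                   # year 1 alone fits the budget: True
print(C[0] + C[1] < B)            # adding year 2 overshoots: False
avg = sum(C) / len(C)             # average yearly cost if risk is random
print(2 * avg < B)                # on average, two years fit: True
```

Running the checks confirms the paper's conclusion: with full risk knowledge only year 1 fits under $26K, while under the expected-cost rule two average years (about $25,390) still fit.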
If we also chose policy B or C, we would have gone over budget. However, if we do not have the risk information for each year and the yearly risk appears at random, then the average cost per year is E(T_3) ≈ $12,695.15, and since 2 E(T_3) ≈ $25,390 < B we can cover two years instead of only one within the budget.

5 Simulation Results

In order to generate the transition probabilities for a semi-Markov chain satisfying (1), a truncated distribution is generated. All entries in a transition
probability matrix must be non-negative. In addition, the entries of every row of a transition probability matrix sum to one. To simplify the problem, only four states (four threats) were used, and initially the risk in each state is equally likely. Four different distributions are used for the transition probabilities from every state to all four possible states in the simulation: the uniform distribution, the exponential distribution with mean 0.5, the standard normal distribution, and the Weibull distribution with ν = 0, α = 0.5 and β = 1 [2]. To generate the transition probability matrix for each distribution, three random variates were generated first. They were then sorted, and the probability of a value being less than or equal to each random variate was calculated. To guarantee that the generated distribution satisfies (1), the probability for the last state was created by subtracting the sum of the generated probabilities from one. Because every entry of the transition probability matrices is non-zero, the initial state will reach the steady state. The steady state is obtained by solving the system of linear equations satisfying (1) and (5) using Gaussian elimination [7]. A sample run of the semi-Markov chain reaches a steady state after one million time units. It is assumed that three years are considered in the simulation. In each year the damage costs for the four threats are 1000, 2000, 3000 and ...; in addition, the investment costs for the four threats are 4000, 3000, 2000 and ... A simulation result (ten runs and their average) for the uniform distribution in the transition probabilities is shown in Table 1 below.

Table 1: A simulation result using uniform distribution

The results in Table 1 are displayed in Figure 1 below.
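The row-generation step described above (three sorted variates, successive CDF differences, remainder assigned to the last state) can be sketched as follows. Only the uniform case is shown, as an illustration rather than the paper's actual code; the other distributions would swap in their own sampler and CDF:

```python
# Sketch (not the paper's code): one row of a 4-state transition matrix
# from three sorted uniform variates. For U(0,1) the CDF is F(x) = x,
# so the CDF values are the sorted variates themselves.
import random

def uniform_row(rng):
    u = sorted(rng.random() for _ in range(3))   # three sorted variates
    probs = [u[0], u[1] - u[0], u[2] - u[1]]     # successive CDF differences
    probs.append(1 - sum(probs))                 # last state takes the remainder
    return probs

rng = random.Random(2012)                        # seeded for repeatability
matrix = [uniform_row(rng) for _ in range(4)]
for row in matrix:
    assert abs(sum(row) - 1) < 1e-12 and min(row) >= 0
print([round(p, 3) for p in matrix[0]])
```

By construction each row sums to one with strictly positive entries (with probability one), so by Corollary 3.1 the generated chain has a unique steady state.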
Figure 1: Yearly cost using uniform distribution

The results for all four distributions are shown in Figure 2 below.

Figure 2: Average cost using four distributions

The total costs are shown in Figure 3 below.
Figure 3: Total cost using four distributions

We can see that the total cost using the Weibull distribution is the lowest.

6 Conclusion

The simple model proposed in this paper can help a company make decisions under different possible threats with different damages and investments.

ACKNOWLEDGEMENTS. This paper is in memory of my mother, for her lifetime of support and encouragement.

References

[1] M. Aburdene, Computer Simulation of Dynamic Systems, Wm. C. Brown Publishing, Dubuque, IA.
[2] J. Banks, J. Carson and B. Nelson, Discrete Event System Simulation, Prentice Hall, New Jersey.
[3] N. Bhat, Elements of Applied Stochastic Processes, John Wiley & Sons, New York.
[4] M. Bishop, Computer Security, Addison-Wesley/Prentice-Hall, Boston, MA, 2003.
[5] P. Bremaud, Markov Chains, Springer, New York, 1999.
[6] M. Molloy, Fundamentals of Performance Modeling, Macmillan Publishing, New York.
[7] J. Mathews, Numerical Methods, Prentice Hall, New Jersey.
[8] R. Panko, Corporate Computer and Network Security, 2nd edition, Prentice Hall, New Jersey.
[9] A. Rencher, Methods of Multivariate Analysis, John Wiley & Sons, New York.
[10] M. Shing, C. Shing, K. Chen and H. Lee, Security Modeling on the Supply Chain Networks, Journal of Systemics, Cybernetics and Informatics, 5(5), (2008).
Linear Algebra Application~ Markov Chains Andrew Berger MATH 224 11 December 2007 1. Introduction Markov chains are named after Russian mathematician Andrei Markov and provide a way of dealing with a sequence
More informationResearch Article Dynamical Models for Computer Viruses Propagation
Hindawi Publishing Corporation Mathematical Problems in Engineering Volume 28, Article ID 9426, 11 pages doi:1.11/28/9426 Research Article Dynamical Models for Computer Viruses Propagation José R. C. Piqueira
More informationInternal Audit Report
Internal Audit Report Right of Way Mapping TxDOT Internal Audit Division Objective To determine the efficiency and effectiveness of district mapping procedures. Opinion Based on the audit scope areas reviewed,
More informationStochastic Models: Markov Chains and their Generalizations
Scuola di Dottorato in Scienza ed Alta Tecnologia Dottorato in Informatica Universita di Torino Stochastic Models: Markov Chains and their Generalizations Gianfranco Balbo e Andras Horvath Outline Introduction
More informationLecture 7: Stochastic Dynamic Programing and Markov Processes
Lecture 7: Stochastic Dynamic Programing and Markov Processes Florian Scheuer References: SLP chapters 9, 10, 11; LS chapters 2 and 6 1 Examples 1.1 Neoclassical Growth Model with Stochastic Technology
More informationMATH 445/545 Homework 1: Due February 11th, 2016
MATH 445/545 Homework 1: Due February 11th, 2016 Answer the following questions Please type your solutions and include the questions and all graphics if needed with the solution 1 A business executive
More informationLinear Riccati Dynamics, Constant Feedback, and Controllability in Linear Quadratic Control Problems
Linear Riccati Dynamics, Constant Feedback, and Controllability in Linear Quadratic Control Problems July 2001 Revised: December 2005 Ronald J. Balvers Douglas W. Mitchell Department of Economics Department
More informationTopics in Probability Theory and Stochastic Processes Steven R. Dunbar. Notation and Problems of Hidden Markov Models
Steven R. Dunbar Department of Mathematics 203 Avery Hall University of Nebraska-Lincoln Lincoln, NE 68588-0130 http://www.math.unl.edu Voice: 402-472-3731 Fax: 402-472-8466 Topics in Probability Theory
More informationLecture XI. Approximating the Invariant Distribution
Lecture XI Approximating the Invariant Distribution Gianluca Violante New York University Quantitative Macroeconomics G. Violante, Invariant Distribution p. 1 /24 SS Equilibrium in the Aiyagari model G.
More informationAn application of hidden Markov models to asset allocation problems
Finance Stochast. 1, 229 238 (1997) c Springer-Verlag 1997 An application of hidden Markov models to asset allocation problems Robert J. Elliott 1, John van der Hoek 2 1 Department of Mathematical Sciences,
More informationStatistics 253/317 Introduction to Probability Models. Winter Midterm Exam Friday, Feb 8, 2013
Statistics 253/317 Introduction to Probability Models Winter 2014 - Midterm Exam Friday, Feb 8, 2013 Student Name (print): (a) Do not sit directly next to another student. (b) This is a closed-book, closed-note
More informationMath 304 Handout: Linear algebra, graphs, and networks.
Math 30 Handout: Linear algebra, graphs, and networks. December, 006. GRAPHS AND ADJACENCY MATRICES. Definition. A graph is a collection of vertices connected by edges. A directed graph is a graph all
More informationCourse 495: Advanced Statistical Machine Learning/Pattern Recognition
Course 495: Advanced Statistical Machine Learning/Pattern Recognition Lecturer: Stefanos Zafeiriou Goal (Lectures): To present discrete and continuous valued probabilistic linear dynamical systems (HMMs
More informationProbability, Random Processes and Inference
INSTITUTO POLITÉCNICO NACIONAL CENTRO DE INVESTIGACION EN COMPUTACION Laboratorio de Ciberseguridad Probability, Random Processes and Inference Dr. Ponciano Jorge Escamilla Ambrosio pescamilla@cic.ipn.mx
More informationFinal Exam December 12, 2017
Introduction to Artificial Intelligence CSE 473, Autumn 2017 Dieter Fox Final Exam December 12, 2017 Directions This exam has 7 problems with 111 points shown in the table below, and you have 110 minutes
More informationAn Application of Graph Theory in Markov Chains Reliability Analysis
An Application of Graph Theory in Markov Chains Reliability Analysis Pavel SKALNY Department of Applied Mathematics, Faculty of Electrical Engineering and Computer Science, VSB Technical University of
More informationStochastic process. X, a series of random variables indexed by t
Stochastic process X, a series of random variables indexed by t X={X(t), t 0} is a continuous time stochastic process X={X(t), t=0,1, } is a discrete time stochastic process X(t) is the state at time t,
More informationData analysis and stochastic modeling
Data analysis and stochastic modeling Lecture 7 An introduction to queueing theory Guillaume Gravier guillaume.gravier@irisa.fr with a lot of help from Paul Jensen s course http://www.me.utexas.edu/ jensen/ormm/instruction/powerpoint/or_models_09/14_queuing.ppt
More informationPracticable Robust Markov Decision Processes
Practicable Robust Markov Decision Processes Huan Xu Department of Mechanical Engineering National University of Singapore Joint work with Shiau-Hong Lim (IBM), Shie Mannor (Techion), Ofir Mebel (Apple)
More informationSocial network analysis: social learning
Social network analysis: social learning Donglei Du (ddu@unb.edu) Faculty of Business Administration, University of New Brunswick, NB Canada Fredericton E3B 9Y2 October 20, 2016 Donglei Du (UNB) AlgoTrading
More informationECE-517: Reinforcement Learning in Artificial Intelligence. Lecture 4: Discrete-Time Markov Chains
ECE-517: Reinforcement Learning in Artificial Intelligence Lecture 4: Discrete-Time Markov Chains September 1, 215 Dr. Itamar Arel College of Engineering Department of Electrical Engineering & Computer
More informationLesson Plan. AM 121: Introduction to Optimization Models and Methods. Lecture 17: Markov Chains. Yiling Chen SEAS. Stochastic process Markov Chains
AM : Introduction to Optimization Models and Methods Lecture 7: Markov Chains Yiling Chen SEAS Lesson Plan Stochastic process Markov Chains n-step probabilities Communicating states, irreducibility Recurrent
More informationLecture notes for Analysis of Algorithms : Markov decision processes
Lecture notes for Analysis of Algorithms : Markov decision processes Lecturer: Thomas Dueholm Hansen June 6, 013 Abstract We give an introduction to infinite-horizon Markov decision processes (MDPs) with
More informationAnalysis of Safety at Quiet Zones
F E D E R A L R A I L R O A D A D M I N I S T R A T I O N Analysis of Safety at Quiet Zones 2014 Global Level Crossing Safety & Trespass Prevention Symposium Urbana, IL F E D E R A L R A I L R O A D A
More informationStatistics for IT Managers
Statistics for IT Managers 95-796, Fall 2012 Module 2: Hypothesis Testing and Statistical Inference (5 lectures) Reading: Statistics for Business and Economics, Ch. 5-7 Confidence intervals Given the sample
More informationOnline Social Networks and Media. Link Analysis and Web Search
Online Social Networks and Media Link Analysis and Web Search How to Organize the Web First try: Human curated Web directories Yahoo, DMOZ, LookSmart How to organize the web Second try: Web Search Information
More informationPOLICY: FI/PL/03 Issued on August 5, Fee Policy for GEF Partner Agencies
POLICY: FI/PL/03 Issued on August 5, 2012 Fee Policy for GEF Partner Agencies Summary: The proposed Agency fee structure in this paper is the decision of the GEF Council from its meeting of June 2012.
More informationDevelopment of a System for Decision Support in the Field of Ecological-Economic Security
Development of a System for Decision Support in the Field of Ecological-Economic Security Tokarev Kirill Evgenievich Candidate of Economic Sciences, Associate Professor, Volgograd State Agricultural University
More informationMarkov Chains CK eqns Classes Hitting times Rec./trans. Strong Markov Stat. distr. Reversibility * Markov Chains
Markov Chains A random process X is a family {X t : t T } of random variables indexed by some set T. When T = {0, 1, 2,... } one speaks about a discrete-time process, for T = R or T = [0, ) one has a continuous-time
More informationMarkov Model. Model representing the different resident states of a system, and the transitions between the different states
Markov Model Model representing the different resident states of a system, and the transitions between the different states (applicable to repairable, as well as non-repairable systems) System behavior
More informationStochastic Processes
Stochastic Processes 8.445 MIT, fall 20 Mid Term Exam Solutions October 27, 20 Your Name: Alberto De Sole Exercise Max Grade Grade 5 5 2 5 5 3 5 5 4 5 5 5 5 5 6 5 5 Total 30 30 Problem :. True / False
More informationLIMITING PROBABILITY TRANSITION MATRIX OF A CONDENSED FIBONACCI TREE
International Journal of Applied Mathematics Volume 31 No. 18, 41-49 ISSN: 1311-178 (printed version); ISSN: 1314-86 (on-line version) doi: http://dx.doi.org/1.173/ijam.v31i.6 LIMITING PROBABILITY TRANSITION
More informationA Non-Parametric Approach of Heteroskedasticity Robust Estimation of Vector-Autoregressive (VAR) Models
Journal of Finance and Investment Analysis, vol.1, no.1, 2012, 55-67 ISSN: 2241-0988 (print version), 2241-0996 (online) International Scientific Press, 2012 A Non-Parametric Approach of Heteroskedasticity
More informationNeural, Parallel, and Scientific Computations 18 (2010) A STOCHASTIC APPROACH TO THE BONUS-MALUS SYSTEM 1. INTRODUCTION
Neural, Parallel, and Scientific Computations 18 (2010) 283-290 A STOCHASTIC APPROACH TO THE BONUS-MALUS SYSTEM KUMER PIAL DAS Department of Mathematics, Lamar University, Beaumont, Texas 77710, USA. ABSTRACT.
More informationTopics. Vectors (column matrices): Vector addition and scalar multiplication The matrix of a linear function y Ax The elements of a matrix A : A ij
Topics Vectors (column matrices): Vector addition and scalar multiplication The matrix of a linear function y Ax The elements of a matrix A : A ij or a ij lives in row i and column j Definition of a matrix
More informationELEC633: Graphical Models
ELEC633: Graphical Models Tahira isa Saleem Scribe from 7 October 2008 References: Casella and George Exploring the Gibbs sampler (1992) Chib and Greenberg Understanding the Metropolis-Hastings algorithm
More informationStatistics 253/317 Introduction to Probability Models. Winter Midterm Exam Monday, Feb 10, 2014
Statistics 253/317 Introduction to Probability Models Winter 2014 - Midterm Exam Monday, Feb 10, 2014 Student Name (print): (a) Do not sit directly next to another student. (b) This is a closed-book, closed-note
More informationInformation Security in the Age of Quantum Technologies
www.pwc.ru Information Security in the Age of Quantum Technologies Algorithms that enable a quantum computer to reduce the time for password generation and data decryption to several hours or even minutes
More informationIrreducibility. Irreducible. every state can be reached from every other state For any i,j, exist an m 0, such that. Absorbing state: p jj =1
Irreducibility Irreducible every state can be reached from every other state For any i,j, exist an m 0, such that i,j are communicate, if the above condition is valid Irreducible: all states are communicate
More informationHigh-dimensional Markov Chain Models for Categorical Data Sequences with Applications Wai-Ki CHING AMACL, Department of Mathematics HKU 19 March 2013
High-dimensional Markov Chain Models for Categorical Data Sequences with Applications Wai-Ki CHING AMACL, Department of Mathematics HKU 19 March 2013 Abstract: Markov chains are popular models for a modelling
More informationOn Polynomial Cases of the Unichain Classification Problem for Markov Decision Processes
On Polynomial Cases of the Unichain Classification Problem for Markov Decision Processes Eugene A. Feinberg Department of Applied Mathematics and Statistics State University of New York at Stony Brook
More informationQUESTION ONE Let 7C = Total Cost MC = Marginal Cost AC = Average Cost
ANSWER QUESTION ONE Let 7C = Total Cost MC = Marginal Cost AC = Average Cost Q = Number of units AC = 7C MC = Q d7c d7c 7C Q Derivation of average cost with respect to quantity is different from marginal
More informationFinal Exam December 12, 2017
Introduction to Artificial Intelligence CSE 473, Autumn 2017 Dieter Fox Final Exam December 12, 2017 Directions This exam has 7 problems with 111 points shown in the table below, and you have 110 minutes
More informationLinear Algebra review Powers of a diagonalizable matrix Spectral decomposition
Linear Algebra review Powers of a diagonalizable matrix Spectral decomposition Prof. Tesler Math 283 Fall 2018 Also see the separate version of this with Matlab and R commands. Prof. Tesler Diagonalizing
More informationShortening Picking Distance by using Rank-Order Clustering and Genetic Algorithm for Distribution Centers
Shortening Picking Distance by using Rank-Order Clustering and Genetic Algorithm for Distribution Centers Rong-Chang Chen, Yi-Ru Liao, Ting-Yao Lin, Chia-Hsin Chuang, Department of Distribution Management,
More informationUndergraduate Research and Creative Practice
Grand Valley State University ScholarWorks@GVSU Honors Projects Undergraduate Research and Creative Practice 2014 The Application of Stochastic Processes in the Sciences: Using Markov Chains to Estimate
More informationDetection theory 101 ELEC-E5410 Signal Processing for Communications
Detection theory 101 ELEC-E5410 Signal Processing for Communications Binary hypothesis testing Null hypothesis H 0 : e.g. noise only Alternative hypothesis H 1 : signal + noise p(x;h 0 ) γ p(x;h 1 ) Trade-off
More informationNetwork Security Based on Quantum Cryptography Multi-qubit Hadamard Matrices
Global Journal of Computer Science and Technology Volume 11 Issue 12 Version 1.0 July Type: Double Blind Peer Reviewed International Research Journal Publisher: Global Journals Inc. (USA) Online ISSN:
More informationGoogle PageRank. Francesco Ricci Faculty of Computer Science Free University of Bozen-Bolzano
Google PageRank Francesco Ricci Faculty of Computer Science Free University of Bozen-Bolzano fricci@unibz.it 1 Content p Linear Algebra p Matrices p Eigenvalues and eigenvectors p Markov chains p Google
More informationManaging the quantum risk to cybersecurity. Global Risk Institute. Michele Mosca 11 April 2016
Managing the quantum risk to cybersecurity Global Risk Institute Michele Mosca 11 April 2016 Cyber technologies are becoming increasingly pervasive. Cybersecurity is a growing and fundamental part of safety
More informationPageRank: The Math-y Version (Or, What To Do When You Can t Tear Up Little Pieces of Paper)
PageRank: The Math-y Version (Or, What To Do When You Can t Tear Up Little Pieces of Paper) In class, we saw this graph, with each node representing people who are following each other on Twitter: Our
More informationLanguage Acquisition and Parameters: Part II
Language Acquisition and Parameters: Part II Matilde Marcolli CS0: Mathematical and Computational Linguistics Winter 205 Transition Matrices in the Markov Chain Model absorbing states correspond to local
More informationLet (Ω, F) be a measureable space. A filtration in discrete time is a sequence of. F s F t
2.2 Filtrations Let (Ω, F) be a measureable space. A filtration in discrete time is a sequence of σ algebras {F t } such that F t F and F t F t+1 for all t = 0, 1,.... In continuous time, the second condition
More information