Multiple Model Adaptive Controller for Partially-Observed Boolean Dynamical Systems
Mahdi Imani and Ulisses Braga-Neto

Abstract—This paper is concerned with developing an adaptive controller for Partially-Observed Boolean Dynamical Systems (POBDS). Assuming that partial knowledge about the system can be modeled by a finite number of candidate models, simultaneous identification and control of a POBDS is achieved by combining a state-feedback controller with a Multiple-Model Adaptive Estimation (MMAE) technique. The proposed method contains two main steps: first, in the offline step, the stationary control policy for the underlying Boolean dynamical system is computed for each candidate model. Then, in the online step, an optimal Bayesian estimator is implemented using a bank of Boolean Kalman Filters (BKFs), each tuned to a candidate model. The result of the offline step, along with the state estimated by the bank of BKFs, specifies the control input to be applied at each time point. The performance of the proposed adaptive controller is investigated using a Boolean network model constructed from melanoma gene expression data observed through RNA-seq measurements.

I. INTRODUCTION

The partially-observed Boolean dynamical system (POBDS) model provides a rich framework for modeling and control of systems containing Boolean state variables observed through noisy measurements. Examples of applications of systems with Boolean states abound, including gene-regulatory networks [1], [2], robotics [3], digital communication systems [4], and more. Several tools for this signal model have been developed in recent years, such as the optimal filter and smoother based on the minimum mean-square error (MMSE) criterion, which are called the Boolean Kalman Filter [5] and Boolean Kalman Smoother [6], respectively.
In addition, particle filtering implementations of these filters, as well as schemes for handling correlated Boolean noise, simultaneous state and parameter estimation, network inference, and fault detection for POBDS were developed in [7]–[11]. Furthermore, the software tool BoolFilter [12] is freely available as an R package for estimation and inference of partially-observed Boolean dynamical systems.

Decision making under various sources of uncertainty is an issue of great interest in many fields [13]–[18]. Unlike various intervention approaches [19]–[22] that have been developed in the context of Probabilistic Boolean Networks (PBNs) [23], S-systems [24], and Bayesian networks [25], which assume that the Boolean states of the system are directly observable, state and output feedback controllers are designed in [26]–[28] to deal with partially-observed Boolean dynamical systems. The output-feedback controller for POBDS in [27] was developed based on the well-known point-based value iteration (PBVI) method, which can only be applied to a POBDS with a finite measurement space. On the other hand, the state-feedback controller for POBDS in [26] was designed based on optimal infinite-horizon control of the underlying Boolean dynamical system using the BKF as state observer and, unlike PBVI for POBDS, it can be applied to an arbitrary measurement space. Notice that both of these controllers require full knowledge of the POBDS. The goal of this paper is to obtain a controller when only partial information is available about the POBDS.

*The authors acknowledge the support of the National Science Foundation, through NSF award CCF. M. Imani and U. M. Braga-Neto are with the Department of Electrical and Computer Engineering, Texas A&M University, College Station, TX, USA (m.imani88@tamu.edu, ulisses@ece.tamu.edu).
For example, the Boolean network topology (e.g., the connections between nodes) may be incompletely known, or noise and other measurement parameters may likewise be unavailable. We build our multiple model adaptive controller on the state-feedback controller [26] and the multiple model adaptive estimator [10], assuming that the unknown system parameters are contained in a finite set. First, in an offline step, the stationary control policy for each candidate model is computed; then, a bank of BKFs running in parallel, each tuned to a candidate model, provides a fully adaptive estimate of both state and parameters. The result of the offline step, along with the state estimated by the bank of BKFs, specifies the control input to be applied at each time point. Performance is investigated using a melanoma regulatory network and simulated RNA-seq measurements.

II. PARTIALLY-OBSERVED BOOLEAN DYNAMICAL SYSTEMS

We describe below the partially-observed Boolean dynamical system (POBDS) model, first proposed in [5]. We assume that the system is described by a state process {X_k; k = 0, 1, ...}, where X_k ∈ {0,1}^d is a Boolean vector of size d; in the case of a gene regulatory network, the components of X_k represent the activation/inactivation state of the genes at time k. The state is affected by a sequence of control inputs {u_k; k = 0, 1, ...}, where u_k ∈ U represents a purposeful intervention into the system state; in the biological example, this might model drug applications. The sequence of states is observed indirectly through the observation process {Y_k; k = 1, 2, ...}, where Y_k is a vector of (typically non-Boolean) measurements. The states are assumed to be updated and observed at each time k through
the following nonlinear signal model:

X_k = f(X_{k-1}, u_{k-1}) ⊕ n_k    (state model)    (1)
Y_k = h(X_k, v_k)    (observation model)    (2)

for k = 1, 2, ..., where f : {0,1}^d × U → {0,1}^d is a network function, {n_k; k = 1, 2, ...} is a white state noise process with n_k ∈ {0,1}^d, {v_k; k = 1, 2, ...} is the observation noise process, and ⊕ indicates componentwise modulo-2 addition. The noise is white in the sense that n_k and n_l are independent for k ≠ l. In addition, the noise process is assumed to be independent of the state process and control input.

A. Boolean Kalman Filter

The optimal filtering problem consists of, given observations Y_{1:k} = (Y_1, ..., Y_k) and control inputs u_{0:k-1} = (u_0, ..., u_{k-1}), finding an estimator X̂_k = h(Y_{1:k}, u_{0:k-1}) of the state X_k that minimizes the conditional mean-square error (MSE):

MSE(X̂_k | Y_{1:k}, u_{0:k-1}) = E[ ||X̂_k − X_k||^2 | Y_{1:k}, u_{0:k-1} ],

at each time step k ≥ 1. For a vector v of size d, define ||v||_1 = Σ_{i=1}^d |v(i)|; define \overline{v} ∈ {0,1}^d via \overline{v}(i) = I_{v(i)>1/2}, for i = 1, ..., d; and define v^c ∈ {0,1}^d via v^c(i) = 1 − v(i), for i = 1, ..., d; where I_{v(i)>1/2} returns 1 if v(i) > 1/2 and 0 otherwise. The optimal MMSE filter is given by [5], [9]

X̂_k^MS = \overline{E[X_k | Y_{1:k}, u_{0:k-1}]},    (3)

with optimal filtering MMSE

MSE(X̂_k^MS | Y_{1:k}, u_{0:k-1}) = || min{ E[X_k | Y_{1:k}, u_{0:k-1}], E[X_k | Y_{1:k}, u_{0:k-1}]^c } ||_1,    (4)

where the minimum is computed componentwise. Both the optimal filter and its MSE can be computed by a recursive matrix-based procedure, called the Boolean Kalman Filter (BKF) [5], which is briefly described next.

Let (x^1, ..., x^{2^d}) be an arbitrary enumeration of the possible state vectors. Define the state conditional probability distribution vectors Π_{k|k} and Π_{k|k-1} by

Π_{k|k}(i) = P(X_k = x^i | Y_{1:k}, u_{0:k-1}),
Π_{k|k-1}(i) = P(X_k = x^i | Y_{1:k-1}, u_{0:k-1}),

for i = 1, ..., 2^d, and k = 1, 2, .... We also define Π_{0|0} to be the initial (prior) distribution of the states at time zero.
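As a concrete illustration, the state model above can be simulated directly. This is a minimal sketch assuming Bernoulli(p) state noise (the noise model used later in the numerical experiment); the 2-gene network function `f` is purely hypothetical.

```python
import numpy as np

def simulate_pobds(f, x0, controls, p=0.05, rng=None):
    """Simulate the Boolean state process X_k = f(X_{k-1}, u_{k-1}) XOR n_k,
    where each component of n_k is an independent Bernoulli(p) perturbation."""
    rng = np.random.default_rng(rng)
    x, states = np.array(x0), []
    for u in controls:
        noise = rng.random(x.size) < p                   # Bernoulli(p) noise n_k
        x = np.bitwise_xor(f(x, u), noise.astype(int))   # state update
        states.append(x.copy())
    return states

# Hypothetical 2-gene network: gene 0 copies gene 1; gene 1 is flipped by u.
f = lambda x, u: np.array([x[1], x[1] ^ u])
```

With p = 0 the trajectory is deterministic, which makes the role of the network function and the control input easy to trace.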
Let the prediction matrix M_k(u) of size 2^d × 2^d be the transition matrix of the controlled Markov chain under control input u defined by the state model:

(M_k(u))_{ij} = P(X_k = x^i | X_{k-1} = x^j, u_{k-1} = u)    (5)
             = P(n_k = f(x^j, u) ⊕ x^i),    (6)

for i, j = 1, ..., 2^d. Additionally, given a value of the observation vector Y_k at time k, the update matrix T_k(Y_k) of size 2^d × 2^d is a diagonal matrix defined by:

(T_k(Y_k))_{ii} = p(Y_k | X_k = x^i),    (7)

for i = 1, ..., 2^d, where p(·) denotes either a probability density function or a probability mass function, in the case of continuous or discrete measurements, respectively. Finally, define the matrix A of size d × 2^d via A = [x^1 ... x^{2^d}]. It can be shown that the optimal MMSE estimator X̂_k^MS can be computed by Algorithm 1 [5], [9].

Algorithm 1 Boolean Kalman Filter
1: Initialization: (Π_{0|0})_i = P(X_0 = x^i), for i = 1, ..., 2^d. For k = 1, 2, ..., do:
2: Prediction: Π_{k|k-1} = M_k(u_{k-1}) Π_{k-1|k-1}
3: Update: β_k = T_k(Y_k) Π_{k|k-1}
4: Filtered Distribution Vector: Π_{k|k} = β_k / ||β_k||_1
5: MMSE Estimator Computation: X̂_k^MS = \overline{A Π_{k|k}}, with optimal conditional MSE MSE(X̂_k^MS | Y_{1:k}, u_{0:k-1}) = ||min{A Π_{k|k}, (A Π_{k|k})^c}||_1.

III. STATE-FEEDBACK CONTROLLER FOR POBDS

The state-feedback controller for POBDS, first introduced in [26], contains offline and online steps. In the offline step, the stationary control policy for the underlying Boolean dynamical system, under the assumption of direct observability of the states, is computed; then, in the online step, the designed control policy is applied to the system based on the state estimated by the BKF. The method is described briefly in the following paragraphs.

The goal of infinite-horizon control of a Boolean dynamical system is to select the appropriate external input u_k ∈ U at each time k to make the system spend the least amount of time, on average, in undesirable states; e.g., states associated with cell proliferation in biological systems, which may be associated with cancer [20].
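Before turning to control, one step of the BKF recursion (Algorithm 1 above) can be sketched numerically. This sketch assumes the prediction matrix M(u) and the diagonal of the update matrix have already been built for the model at hand; only the recursion itself is shown.

```python
import itertools
import numpy as np

def enumerate_states(d):
    """All 2^d Boolean state vectors, as rows of a (2^d, d) array."""
    return np.array(list(itertools.product([0, 1], repeat=d)))

def bkf_step(Pi, M_u, T_y, A):
    """One prediction/update step of the Boolean Kalman Filter.

    Pi  : filtered distribution vector from the previous step, shape (2^d,)
    M_u : prediction (transition) matrix for the applied control, (2^d, 2^d)
    T_y : diagonal of the update matrix, i.e. p(Y_k | X_k = x^i), shape (2^d,)
    A   : matrix whose columns are the state vectors, shape (d, 2^d)
    """
    Pi_pred = M_u @ Pi                      # prediction: Pi_{k|k-1}
    beta = T_y * Pi_pred                    # unnormalized update with Y_k
    Pi_new = beta / beta.sum()              # filtered distribution Pi_{k|k}
    cond_mean = A @ Pi_new                  # E[X_k | Y_{1:k}, u_{0:k-1}]
    x_hat = (cond_mean > 0.5).astype(int)   # MMSE estimate (thresholding)
    mse = np.minimum(cond_mean, 1 - cond_mean).sum()
    return Pi_new, x_hat, mse, beta.sum()
```

Note that `beta.sum()` is the quantity ||β_k||_1 that the MMAE posterior update of Section IV consumes.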
In formal terms, assuming a bounded cost of control g(X_k, u_k), our goal is to find a stationary control policy µ : {0,1}^d → U which minimizes the infinite-horizon cost (for a given initial state X_0 = x^j):

J_µ(j) = lim_{m→∞} E[ Σ_{k=0}^{m} γ^k g(X_k, µ(X_k)) ],    (8)

for j = 1, ..., 2^d, where 0 < γ < 1 is a discounting factor that ensures that the limit of the finite sums converges as the horizon length m goes to infinity. We assume that the system prediction matrix M_k(u) can depend on time k only through the control input u. We will
therefore drop the index k and write simply M(u). Defining a mapping T as

T[J](j) = min_{u ∈ U} { g(x^j, u) + γ Σ_{i=1}^{2^d} (M(u))_{ij} J(i) },    (9)

the optimal stationary control policy can be obtained by starting with an arbitrary initial cost function J_0 : {0,1}^d → R and running the iteration

J_t = T[J_{t-1}],    (10)

until a fixed point is obtained; it can be shown that the iteration will indeed converge to a fixed point [30]. This fixed point is the optimal cost J* ∈ R^{2^d}, and the corresponding policy µ* ∈ U^{2^d} is the optimal stationary control policy.

After obtaining the offline stationary control policy for the underlying BDS by the value iteration method, the state estimated by the Boolean Kalman Filter is used for decision making in the online process. If µ* denotes the optimal stationary policy, then the control input at time k is given simply by:

u_k^VBKF = µ*(X̂_k^MS),    (11)

where X̂_k^MS is the state estimate at time k obtained by the BKF given the measurements Y_{1:k} and the sequence of controls u_{0:k-1}, as described in the previous section. For more information, the reader is referred to [26].

IV. MULTIPLE MODEL ADAPTIVE CONTROLLER FOR POBDS

The state-feedback controller introduced in the previous section requires full knowledge of the POBDS. Suppose, on the other hand, that the nonlinear signal model in (1)–(2) is incompletely specified. For example, the deterministic functions f and h may be only partially known, or the statistics of the noise processes n_k and v_k may need to be estimated. We assume that the missing information can be coded into a finite-dimensional parameter vector θ ∈ Θ, where Θ = {θ^1, ..., θ^M} is the parameter space. Control of such a partially-known POBDS can be achieved by a combination of the state-feedback controller introduced in Section III and the multiple model adaptive estimation method in [10]. Similar to the state-feedback controller, the proposed method contains offline and online steps. First, the stationary control policy for each candidate is computed (e.g., by
running M value iteration methods) in the offline step. Then, after obtaining the stationary control policy µ*_{θ^i} for all i = 1, ..., M, an optimal Bayesian procedure is performed in the online process to select the candidate with the largest posterior probability given all the information up to the current time. The computation makes use of probabilities computed by a bank of BKFs running in parallel, one for each candidate model. This is similar to the multiple model adaptive estimation (MMAE) method for linear systems [31].

Given the prior probability p_0(θ^i) of candidate model i, for i = 1, ..., M, the posterior probability p_k(θ^i) at time k, given the history of observations Y_{1:k} and the sequence of control inputs u_{0:k-1}, can be obtained as:

p_k(θ^i) = P(θ = θ^i | Y_{1:k}, u_{0:k-1})
        = p(Y_k | θ = θ^i, Y_{1:k-1}, u_{0:k-1}) p_{k-1}(θ^i) / Σ_{l=1}^{M} p(Y_k | θ = θ^l, Y_{1:k-1}, u_{0:k-1}) p_{k-1}(θ^l).    (12)

Furthermore, one can write

p(Y_k | θ = θ^i, Y_{1:k-1}, u_{0:k-1})
 = Σ_{j=1}^{2^d} p(Y_k | X_k = x^j, θ = θ^i) P(X_k = x^j | θ = θ^i, Y_{1:k-1}, u_{0:k-1})
 = Σ_{j=1}^{2^d} (T_k^{θ^i}(Y_k))_{jj} Π_{k|k-1}^{θ^i}(j)
 = || T_k^{θ^i}(Y_k) Π_{k|k-1}^{θ^i} ||_1 = ||β_k^{θ^i}||_1,    (13)

with β_k^{θ^i} denoting the unnormalized PDV at time k, computed at the update step of the BKF tuned to θ^i. Combining equations (12) and (13), we obtain the update equation for the candidate model probabilities:

p_k(θ^i) = ||β_k^{θ^i}||_1 p_{k-1}(θ^i) / Σ_{l=1}^{M} ||β_k^{θ^l}||_1 p_{k-1}(θ^l),  for i = 1, ..., M.    (14)

Model selection at time k can then be accomplished by a maximum a-posteriori criterion:

θ̂_k = argmax_{θ ∈ {θ^1, ..., θ^M}} p_k(θ),    (15)

which leads to the estimate of the state at time k:

X̂_k^MMAE = X̂_k^MS(θ̂_k),    (16)

where X̂_k^MS(θ) denotes the optimal MMSE state estimate produced by a BKF tuned to the parameter θ. Finally, the control input is the stationary control policy of the selected model in (15) applied to the MMAE estimate of the state:

u_k = µ*_{θ̂_k}(X̂_k^MMAE).    (17)

A schematic diagram of the multiple model adaptive controller for POBDS is presented in Figure 1. The entire procedure is displayed in Algorithm 2.
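The posterior update (14) and the MAP model selection (15) reduce to a few lines once each BKF in the bank reports its unnormalized-PDV norm ||β_k^{θ_i}||_1; a minimal sketch:

```python
import numpy as np

def mmae_update(posteriors, beta_norms):
    """Update candidate-model probabilities, equation (14).

    posteriors : p_{k-1}(theta_i) for each candidate, shape (M,)
    beta_norms : ||beta_k^{theta_i}||_1, the unnormalized PDV norms produced
                 at the update step of the BKF tuned to each candidate
    """
    w = np.asarray(beta_norms) * np.asarray(posteriors)
    return w / w.sum()

def select_model(posteriors):
    """Maximum a-posteriori model selection: index of the chosen candidate."""
    return int(np.argmax(posteriors))
```

For example, starting from a uniform prior over three candidates, a measurement that is four times more likely under the first model shifts its posterior to 2/3 after a single update.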
V. NUMERICAL EXPERIMENT

In this section, we conduct a numerical experiment using a Boolean network for metastatic melanoma [32]. The network contains 7 genes: WNT5A, pirin, S100P, RET1, MART1, HADHB and STC2. The regulatory relationships for this network are presented in Table I. For each gene, the output binary string specifies the output value for each value of the input gene(s). For example, the last row of Table I specifies the value of STC2 at the current time step k for the different pairs of (pirin, STC2) values at the previous time step k−1:

(pirin = 0, STC2 = 0)_{k−1} → STC2_k = 1
(pirin = 0, STC2 = 1)_{k−1} → STC2_k = 1
(pirin = 1, STC2 = 0)_{k−1} → STC2_k = 0
(pirin = 1, STC2 = 1)_{k−1} → STC2_k = 1
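The output-binary-string convention can be evaluated programmatically: the input values, read as a binary number, index into the string. A small hypothetical helper, using the STC2 row above (output string "1101"):

```python
def boolean_update(output_string, *inputs):
    """Evaluate a node update from the output-binary-string notation of
    Table I: the inputs, read as a binary number, index into the string."""
    idx = int("".join(str(b) for b in inputs), 2)
    return int(output_string[idx])
```

For instance, `boolean_update("1101", 1, 0)` reproduces the (pirin = 1, STC2 = 0) → 0 transition listed above.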
[Figure: the System feeds a bank of BKFs for Models 1, ..., M, whose outputs drive the Posterior Update and Model Selection blocks, combined with the Offline Step (Stationary Control Policies).]
Fig. 1: Schematic diagram of the multiple model adaptive controller for POBDS.

TABLE I: Boolean functions for the melanoma Boolean network using output binary string notation (see text).

Gene    Input gene(s)           Output
WNT5A   HADHB                   10
pirin   pirin, RET1, HADHB      …
S100P   S100P, RET1, STC2       …
RET1    RET1, HADHB, STC2       …
MART1   pirin, MART1, STC2      …
HADHB   pirin, S100P, RET1      …
STC2    pirin, STC2             1101

Algorithm 2 Multiple Model Adaptive Controller for POBDS
1: OFFLINE STEP
   1) Compute the stationary control policy µ*_{θ^i}, for all candidates i = 1, ..., M, by running M parallel value iteration methods.
2: ONLINE STEP
   1) Initialization: Set the prior distribution of each candidate, p_0(θ^i), i = 1, ..., M. For k = 1, 2, ..., do:
   2) Posterior Update: Using the outputs β_k^{θ^i} of the bank of BKFs, for i = 1, ..., M, update the posterior probability of each candidate as: p_k(θ^i) = ||β_k^{θ^i}||_1 p_{k-1}(θ^i) / Σ_{l=1}^{M} ||β_k^{θ^l}||_1 p_{k-1}(θ^l), for i = 1, ..., M.
   3) Parameter Estimation: The estimate of the parameter θ at time k is θ̂_k = argmax_{θ ∈ {θ^1, ..., θ^M}} p_k(θ).
   4) State Estimation: The MMAE estimate of the state at time k is the state estimated by the BKF tuned to the estimated parameter: X̂_k^MMAE = X̂_k^MS(θ̂_k).
   5) Control: The control input is the stationary control policy of the selected model applied to the MMAE estimate of the state: u_k = µ*_{θ̂_k}(X̂_k^MMAE).

The goal of control is to prevent the WNT5A gene from being upregulated. For more information about the biological rationale, the reader is referred to [32]. The control input space is U = {0, 1}, in which u = 1 refers to flipping the state of RET1 and u = 0 refers to no action. The cost function is defined as follows:

g(x^j, u) = 6 if WNT5A = 1 and u = 1,
            5 if WNT5A = 1 and u = 0,
            1 if WNT5A = 0 and u = 1,
            0 if WNT5A = 0 and u = 0,

for j = 1, ..., 2^d.
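The four cases of this cost function decompose into a state penalty of 5 for an upregulated WNT5A plus a control penalty of 1 for applying the intervention, which is how one would code it:

```python
def stage_cost(wnt5a, u):
    """Stage cost g(x, u) from the melanoma experiment: a penalty of 5
    for an upregulated WNT5A plus a penalty of 1 for applying control."""
    return 5 * int(wnt5a == 1) + 1 * int(u == 1)
```

This additive form makes the trade-off explicit: the controller will intervene whenever the expected discounted reduction in WNT5A activity outweighs the unit cost of the drug.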
The process noise is assumed to have independent components, identically distributed as Bernoulli with a small intensity p, so that all genes are perturbed with a small probability. We assume the Boolean states are observed through a single-lane RNA-seq experiment, in which each gene is assumed to be observed independently, so that Y_k(j) is the read count corresponding to transcript j in the single lane, for j = 1, ..., 7. Assuming a negative binomial
model for each count, we have:

P(Y_k(j) = y(j) | X_k(j) = x(j)) = [ Γ(y(j) + φ_j) / ( y(j)! Γ(φ_j) ) ] · ( φ_j / (λ_j + φ_j) )^{φ_j} · ( λ_j / (λ_j + φ_j) )^{y(j)},    (18)

where Γ denotes the Gamma function, and φ_j, λ_j > 0 are the real-valued inverse dispersion parameter and mean read count of transcript j, respectively, for j = 1, ..., 7. According to the Boolean state model, there are two possible states for the abundance of transcript j: high, if x(j) = 1, and low, if x(j) = 0. Accordingly, we model the parameter λ_j in log-space as [9], [26]:

log λ_j = log s + µ + δ_j x(j),    (19)

where the parameter s is the sequencing depth (which is instrument-dependent), µ > 0 is the baseline level of expression in the inactivated transcriptional state, and δ_j > 0 expresses the effect on the observed RNA-seq read count as gene j goes from the inactivated to the activated state, for j = 1, ..., 7.

The parameters for the simulation are as follows. The discount factor γ is assumed to be . The criterion for stopping the value iteration is to exit when max_{j=1,...,2^d} |J'(j) − J(j)| becomes less than 10^{-12}, where J and J' are the cost values at consecutive iterations. The parameters of the observation model are set to s = , µ = 0.1, φ_i = 2, δ_i = 2, for i = 1, ..., 7. We consider the intensity p of the process noise to be unknown, and assume it can take the values 0.01, 0.05, or 0.1, which leads to three possible candidate models. Hence, the stationary control policy for each of the three candidate models is first computed, and then three BKFs are run in parallel for the simultaneous estimation and control process (Figure 1). A uniform prior is assumed for all candidates (p_0(θ^i) = 1/3, i = 1, 2, 3). The actual simulated trajectories are generated assuming p = .

Figure 2 displays the estimated model for a single trajectory under control of RET1 and without control. It is clear that the correct model is selected in both cases in less than 20 measurements.
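Returning briefly to the observation model, the negative binomial likelihood (18) with the log-space mean (19) is straightforward to evaluate in log-domain. A sketch using the experiment's parameter values µ = 0.1, δ = 2, φ = 2; the sequencing depth s = 1.0 is an illustrative placeholder, since its value is not given here:

```python
import math

def nb_loglik(y, x, s=1.0, mu=0.1, delta=2.0, phi=2.0):
    """Log-likelihood of read count y for one transcript under the
    negative binomial RNA-seq model (18), with mean given by (19):
    log(lambda) = log(s) + mu + delta * x, where x is 0 or 1."""
    lam = s * math.exp(mu + delta * x)
    return (math.lgamma(y + phi) - math.lgamma(phi) - math.lgamma(y + 1)
            + phi * math.log(phi / (lam + phi))
            + y * math.log(lam / (lam + phi)))
```

These per-transcript log-likelihoods, summed over the 7 genes, give the diagonal entries of the update matrix T_k(Y_k) used by each BKF in the bank.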
In addition, the activation status of the WNT5A gene, under control of the RET1 gene and without control, is displayed in Figure 2 for 100 time steps. It can be seen that WNT5A is mostly upregulated in the system without control, which is undesirable, as opposed to the system under control by the proposed multiple model adaptive controller. Figure 3 displays the average cost (over 100 independent runs) under control of RET1 and without control. In all cases, the system started from rest (all genes inactivated at time 0). It is clear that the system under control by the proposed multiple model adaptive controller has a significantly lower average cost than the system without control.

Fig. 2: Estimated model and activation status of WNT5A with and without control.
Fig. 3: Average cost with and without control.

VI. CONCLUSION

In this paper, we proposed a multiple model adaptive controller for partially-observed Boolean dynamical systems, for the case when the system is only partially known. The proposed method is based on a state-feedback controller and the multiple model adaptive estimation technique. The application of the proposed method was discussed in the context of a Boolean network of melanoma observed through RNA-seq measurements.

REFERENCES

[1] S. A. Kauffman, "Metabolic stability and epigenesis in randomly constructed genetic nets," Journal of Theoretical Biology, vol. 22, no. 3.
[2] A. Karbalayghareh, U. Braga-Neto, J. Hua, and E. R. Dougherty, "Classification of state trajectories in gene regulatory networks," IEEE/ACM Transactions on Computational Biology and Bioinformatics.
[3] A. Roli, M. Manfroni, C. Pinciroli, and M. Birattari, "On the design of Boolean network robots," in Applications of Evolutionary Computation, Springer.
[4] D. G.
Messerschmitt, "Synchronization in digital system design," IEEE Journal on Selected Areas in Communications, vol. 8, no. 8.
[5] U. Braga-Neto, "Optimal state estimation for Boolean dynamical systems," in Conference Record of the Forty-Fifth Asilomar Conference on Signals, Systems and Computers (ASILOMAR), IEEE, 2011.
[6] M. Imani and U. Braga-Neto, "Optimal state estimation for Boolean dynamical systems using a Boolean Kalman smoother," in 2015 IEEE Global Conference on Signal and Information Processing (GlobalSIP), IEEE.
[7] M. Imani and U. Braga-Neto, "Particle filters for partially-observed Boolean dynamical systems," arXiv preprint, 2017.
[8] L. D. McClenny, M. Imani, and U. Braga-Neto, "Boolean Kalman filter with correlated observation noise," in 42nd IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP 2017), IEEE.
[9] M. Imani and U. Braga-Neto, "Maximum-likelihood adaptive filter for partially-observed Boolean dynamical systems," IEEE Transactions on Signal Processing, vol. 65, no. 2.
[10] M. Imani and U. Braga-Neto, "Optimal gene regulatory network inference using the Boolean Kalman filter and multiple model adaptive estimation," in Asilomar Conference on Signals, Systems and Computers, IEEE.
[11] A. Bahadorinejad and U. Braga-Neto, "Optimal fault detection and diagnosis in transcriptional circuits using next-generation sequencing," IEEE/ACM Transactions on Computational Biology and Bioinformatics.
[12] L. D. McClenny, M. Imani, and U. Braga-Neto, "BoolFilter package vignette."
[13] S. F. Ghoreishi and D. L. Allaire, "Compositional uncertainty analysis via importance weighted Gibbs sampling for coupled multidisciplinary systems," in 18th AIAA Non-Deterministic Approaches Conference, p. 1443.
[14] E. Nozari, F. Pasqualetti, and J. Cortes, "Time-varying actuator scheduling in complex networks," arXiv preprint.
[15] S. Z. Dadaneh and X. Qian, "Bayesian module identification from multiple noisy networks," EURASIP Journal on Bioinformatics and Systems Biology, vol. 2016, no. 1, p. 1.
[16] A. Sarrafi and Z. Mao, "Probabilistic uncertainty quantification of wavelet-transform-based structural health monitoring features," in SPIE Smart Structures and Materials + Nondestructive Evaluation and Health Monitoring, International Society for Optics and Photonics.
[17] S. F. Ghoreishi, "Uncertainty analysis for coupled multidisciplinary systems using sequential importance resampling," Master's thesis, Texas A&M University.
[18] A. Sarrafi and Z.
Mao, "Statistical modeling of wavelet-transform-based features in structural health monitoring," in Model Validation and Uncertainty Quantification, Volume 3, Springer.
[19] R. Layek, A. Datta, R. Pal, and E. R. Dougherty, "Adaptive intervention in probabilistic Boolean networks," Bioinformatics, vol. 25, no. 16.
[20] A. Datta, A. Choudhary, M. L. Bittner, and E. R. Dougherty, "External control in Markovian genetic regulatory networks," Machine Learning, vol. 52, no. 1-2.
[21] R. Pal, A. Datta, and E. R. Dougherty, "Optimal infinite-horizon control for probabilistic Boolean networks," IEEE Transactions on Signal Processing, vol. 54, no. 6.
[22] Á. Halász, M. S. Sakar, H. Rubin, V. Kumar, G. J. Pappas, et al., "Stochastic modeling and control of biological systems: the lactose regulation system of Escherichia coli," IEEE Transactions on Automatic Control, vol. 53, Special Issue.
[23] I. Shmulevich, E. R. Dougherty, and W. Zhang, "From Boolean to probabilistic Boolean networks as models of genetic regulatory networks," Proceedings of the IEEE, vol. 90, no. 11.
[24] E. O. Voit and J. Almeida, "Decoupling dynamical systems for pathway identification from metabolic profiles," Bioinformatics, vol. 20, no. 11.
[25] N. Friedman, M. Linial, I. Nachman, and D. Pe'er, "Using Bayesian networks to analyze expression data," Journal of Computational Biology, vol. 7, no. 3-4.
[26] M. Imani and U. Braga-Neto, "State-feedback control of partially-observed Boolean dynamical systems using RNA-seq time series data," in American Control Conference (ACC), 2016, IEEE.
[27] M. Imani and U. Braga-Neto, "Point-based value iteration for partially-observed Boolean dynamical systems with finite observation space," in 2016 IEEE 55th Conference on Decision and Control (CDC), IEEE.
[28] M. Imani and U. Braga-Neto, "Control of gene regulatory networks with noisy measurements and uncertain inputs," arXiv preprint.
[29] S. Friedman, S. F. Ghoreishi, and D. L.
Allaire, "Quantifying the impact of different model discrepancy formulations in coupled multidisciplinary systems," in 19th AIAA Non-Deterministic Approaches Conference, p. 1950.
[30] D. P. Bertsekas, Dynamic Programming and Optimal Control, vol. 1. Athena Scientific, Belmont, MA.
[31] P. S. Maybeck and P. D. Hanlon, "Performance enhancement of a multiple model adaptive estimator," IEEE Transactions on Aerospace and Electronic Systems, vol. 31, no. 4.
[32] E. R. Dougherty, R. Pal, X. Qian, M. L. Bittner, and A. Datta, "Stationary and structural control in gene regulatory networks: basic concepts," International Journal of Systems Science, vol. 41, no. 1, pp. 5-16, 2010.
More informationIntroduction to Bioinformatics
Systems biology Introduction to Bioinformatics Systems biology: modeling biological p Study of whole biological systems p Wholeness : Organization of dynamic interactions Different behaviour of the individual
More informationIntroduction to Systems Analysis and Decision Making Prepared by: Jakub Tomczak
Introduction to Systems Analysis and Decision Making Prepared by: Jakub Tomczak 1 Introduction. Random variables During the course we are interested in reasoning about considered phenomenon. In other words,
More informationParameter Estimation
1 / 44 Parameter Estimation Saravanan Vijayakumaran sarva@ee.iitb.ac.in Department of Electrical Engineering Indian Institute of Technology Bombay October 25, 2012 Motivation System Model used to Derive
More informationElements of Reinforcement Learning
Elements of Reinforcement Learning Policy: way learning algorithm behaves (mapping from state to action) Reward function: Mapping of state action pair to reward or cost Value function: long term reward,
More informationOptimal Perturbation Control of General Topology Molecular Networks
IEEE TRANSACTIONS ON SIGNAL PROCESSING, VOL. 61, NO. 7, APRIL 1, 2013 1733 Optimal Perturbation Control of General Topology Molecular Networks Nidhal Bouaynaya, Member, IEEE, Roman Shterenberg, and Dan
More informationLecture 4: Probabilistic Learning. Estimation Theory. Classification with Probability Distributions
DD2431 Autumn, 2014 1 2 3 Classification with Probability Distributions Estimation Theory Classification in the last lecture we assumed we new: P(y) Prior P(x y) Lielihood x2 x features y {ω 1,..., ω K
More informationExpectation propagation for signal detection in flat-fading channels
Expectation propagation for signal detection in flat-fading channels Yuan Qi MIT Media Lab Cambridge, MA, 02139 USA yuanqi@media.mit.edu Thomas Minka CMU Statistics Department Pittsburgh, PA 15213 USA
More informationSimultaneous state and input estimation with partial information on the inputs
Loughborough University Institutional Repository Simultaneous state and input estimation with partial information on the inputs This item was submitted to Loughborough University's Institutional Repository
More informationSYSTEMS MEDICINE: AN INTEGRATED APPROACH WITH DECISION MAKING PERSPECTIVE. A Dissertation BABAK FARYABI
SYSTEMS MEDICINE: AN INTEGRATED APPROACH WITH DECISION MAKING PERSPECTIVE A Dissertation by BABAK FARYABI Submitted to the Office of Graduate Studies of Texas A&M University in partial fulfillment of the
More informationMachine Learning, Fall 2012 Homework 2
0-60 Machine Learning, Fall 202 Homework 2 Instructors: Tom Mitchell, Ziv Bar-Joseph TA in charge: Selen Uguroglu email: sugurogl@cs.cmu.edu SOLUTIONS Naive Bayes, 20 points Problem. Basic concepts, 0
More informationHuman-Oriented Robotics. Temporal Reasoning. Kai Arras Social Robotics Lab, University of Freiburg
Temporal Reasoning Kai Arras, University of Freiburg 1 Temporal Reasoning Contents Introduction Temporal Reasoning Hidden Markov Models Linear Dynamical Systems (LDS) Kalman Filter 2 Temporal Reasoning
More informationIntroduction Probabilistic Programming ProPPA Inference Results Conclusions. Embedding Machine Learning in Stochastic Process Algebra.
Embedding Machine Learning in Stochastic Process Algebra Jane Hillston Joint work with Anastasis Georgoulas and Guido Sanguinetti, School of Informatics, University of Edinburgh 16th August 2017 quan col....
More informationGrundlagen der Künstlichen Intelligenz
Grundlagen der Künstlichen Intelligenz Uncertainty & Probabilities & Bandits Daniel Hennes 16.11.2017 (WS 2017/18) University Stuttgart - IPVS - Machine Learning & Robotics 1 Today Uncertainty Probability
More informationReinforcement Learning with Reference Tracking Control in Continuous State Spaces
Reinforcement Learning with Reference Tracking Control in Continuous State Spaces Joseph Hall, Carl Edward Rasmussen and Jan Maciejowski Abstract The contribution described in this paper is an algorithm
More informationGenetic Networks. Korbinian Strimmer. Seminar: Statistical Analysis of RNA-Seq Data 19 June IMISE, Universität Leipzig
Genetic Networks Korbinian Strimmer IMISE, Universität Leipzig Seminar: Statistical Analysis of RNA-Seq Data 19 June 2012 Korbinian Strimmer, RNA-Seq Networks, 19/6/2012 1 Paper G. I. Allen and Z. Liu.
More informationThe Regularized EM Algorithm
The Regularized EM Algorithm Haifeng Li Department of Computer Science University of California Riverside, CA 92521 hli@cs.ucr.edu Keshu Zhang Human Interaction Research Lab Motorola, Inc. Tempe, AZ 85282
More informationAsynchronous Stochastic Boolean Networks as Gene Network Models
Journal of Computational Biology Journal of Computational Biology: http://mc.manuscriptcentral.com/liebert/jcb Asynchronous Stochastic Boolean Networks as Gene Network Models Journal: Journal of Computational
More informationIntroduction to Artificial Intelligence (AI)
Introduction to Artificial Intelligence (AI) Computer Science cpsc502, Lecture 10 Oct, 13, 2011 CPSC 502, Lecture 10 Slide 1 Today Oct 13 Inference in HMMs More on Robot Localization CPSC 502, Lecture
More informationHOPFIELD neural networks (HNNs) are a class of nonlinear
IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS II: EXPRESS BRIEFS, VOL. 52, NO. 4, APRIL 2005 213 Stochastic Noise Process Enhancement of Hopfield Neural Networks Vladimir Pavlović, Member, IEEE, Dan Schonfeld,
More informationRecent Advances in SPSA at the Extremes: Adaptive Methods for Smooth Problems and Discrete Methods for Non-Smooth Problems
Recent Advances in SPSA at the Extremes: Adaptive Methods for Smooth Problems and Discrete Methods for Non-Smooth Problems SGM2014: Stochastic Gradient Methods IPAM, February 24 28, 2014 James C. Spall
More informationNetworks in systems biology
Networks in systems biology Matthew Macauley Department of Mathematical Sciences Clemson University http://www.math.clemson.edu/~macaule/ Math 4500, Spring 2017 M. Macauley (Clemson) Networks in systems
More information10 Robotic Exploration and Information Gathering
NAVARCH/EECS 568, ROB 530 - Winter 2018 10 Robotic Exploration and Information Gathering Maani Ghaffari April 2, 2018 Robotic Information Gathering: Exploration and Monitoring In information gathering
More informationarxiv: v1 [cs.sy] 25 Oct 2017
Reconstruct the Logical Network from the Transition Matrix Cailu Wang, Yuegang Tao School of Control Science and Engineering, Hebei University of Technology, Tianjin, 300130, P. R. China arxiv:1710.09681v1
More informationA FEASIBILITY STUDY OF PARTICLE FILTERS FOR MOBILE STATION RECEIVERS. Michael Lunglmayr, Martin Krueger, Mario Huemer
A FEASIBILITY STUDY OF PARTICLE FILTERS FOR MOBILE STATION RECEIVERS Michael Lunglmayr, Martin Krueger, Mario Huemer Michael Lunglmayr and Martin Krueger are with Infineon Technologies AG, Munich email:
More informationLinear stochastic approximation driven by slowly varying Markov chains
Available online at www.sciencedirect.com Systems & Control Letters 50 2003 95 102 www.elsevier.com/locate/sysconle Linear stochastic approximation driven by slowly varying Marov chains Viay R. Konda,
More informationEM-algorithm for Training of State-space Models with Application to Time Series Prediction
EM-algorithm for Training of State-space Models with Application to Time Series Prediction Elia Liitiäinen, Nima Reyhani and Amaury Lendasse Helsinki University of Technology - Neural Networks Research
More informationDecision-making, inference, and learning theory. ECE 830 & CS 761, Spring 2016
Decision-making, inference, and learning theory ECE 830 & CS 761, Spring 2016 1 / 22 What do we have here? Given measurements or observations of some physical process, we ask the simple question what do
More informationCS Lecture 18. Expectation Maximization
CS 6347 Lecture 18 Expectation Maximization Unobserved Variables Latent or hidden variables in the model are never observed We may or may not be interested in their values, but their existence is crucial
More informationA graph contains a set of nodes (vertices) connected by links (edges or arcs)
BOLTZMANN MACHINES Generative Models Graphical Models A graph contains a set of nodes (vertices) connected by links (edges or arcs) In a probabilistic graphical model, each node represents a random variable,
More informationWhy do we care? Measurements. Handling uncertainty over time: predicting, estimating, recognizing, learning. Dealing with time
Handling uncertainty over time: predicting, estimating, recognizing, learning Chris Atkeson 2004 Why do we care? Speech recognition makes use of dependence of words and phonemes across time. Knowing where
More informationLearning Gaussian Process Models from Uncertain Data
Learning Gaussian Process Models from Uncertain Data Patrick Dallaire, Camille Besse, and Brahim Chaib-draa DAMAS Laboratory, Computer Science & Software Engineering Department, Laval University, Canada
More informationBayesian Methods for Machine Learning
Bayesian Methods for Machine Learning CS 584: Big Data Analytics Material adapted from Radford Neal s tutorial (http://ftp.cs.utoronto.ca/pub/radford/bayes-tut.pdf), Zoubin Ghahramni (http://hunch.net/~coms-4771/zoubin_ghahramani_bayesian_learning.pdf),
More informationLearning in Bayesian Networks
Learning in Bayesian Networks Florian Markowetz Max-Planck-Institute for Molecular Genetics Computational Molecular Biology Berlin Berlin: 20.06.2002 1 Overview 1. Bayesian Networks Stochastic Networks
More informationBasic modeling approaches for biological systems. Mahesh Bule
Basic modeling approaches for biological systems Mahesh Bule The hierarchy of life from atoms to living organisms Modeling biological processes often requires accounting for action and feedback involving
More informationTime Series Prediction by Kalman Smoother with Cross-Validated Noise Density
Time Series Prediction by Kalman Smoother with Cross-Validated Noise Density Simo Särkkä E-mail: simo.sarkka@hut.fi Aki Vehtari E-mail: aki.vehtari@hut.fi Jouko Lampinen E-mail: jouko.lampinen@hut.fi Abstract
More informationBayesian Networks BY: MOHAMAD ALSABBAGH
Bayesian Networks BY: MOHAMAD ALSABBAGH Outlines Introduction Bayes Rule Bayesian Networks (BN) Representation Size of a Bayesian Network Inference via BN BN Learning Dynamic BN Introduction Conditional
More informationCompressive Sensing under Matrix Uncertainties: An Approximate Message Passing Approach
Compressive Sensing under Matrix Uncertainties: An Approximate Message Passing Approach Asilomar 2011 Jason T. Parker (AFRL/RYAP) Philip Schniter (OSU) Volkan Cevher (EPFL) Problem Statement Traditional
More informationSupplementary Information for cryosparc: Algorithms for rapid unsupervised cryo-em structure determination
Supplementary Information for cryosparc: Algorithms for rapid unsupervised cryo-em structure determination Supplementary Note : Stochastic Gradient Descent (SGD) SGD iteratively optimizes an objective
More informationBlind phase/frequency synchronization with low-precision ADC: a Bayesian approach
Blind phase/frequency synchronization with low-precision ADC: a Bayesian approach Aseem Wadhwa, Upamanyu Madhow Department of ECE, UCSB 1/26 Modern Communication Receiver Architecture Analog Digital TX
More informationCONTROL OF STATIONARY BEHAVIOR IN PROBABILISTIC BOOLEAN NETWORKS BY MEANS OF STRUCTURAL INTERVENTION
Journal of Biological Systems, Vol. 0, No. 4 (2002) 43 445 c World Scientific Publishing Company CONTROL OF STATIONARY BEHAVIOR IN PROBABILISTIC BOOLEAN NETWORKS BY MEANS OF STRUCTURAL INTERVENTION ILYA
More informationCSEP 573: Artificial Intelligence
CSEP 573: Artificial Intelligence Hidden Markov Models Luke Zettlemoyer Many slides over the course adapted from either Dan Klein, Stuart Russell, Andrew Moore, Ali Farhadi, or Dan Weld 1 Outline Probabilistic
More informationProbabilistic reconstruction of the tumor progression process in gene regulatory networks in the presence of uncertainty
Probabilistic reconstruction of the tumor progression process in gene regulatory networks in the presence of uncertainty Mohammad Shahrokh Esfahani, Byung-Jun Yoon, Edward R. Dougherty,2 Department of
More informationPartially Observable Markov Decision Processes (POMDPs) Pieter Abbeel UC Berkeley EECS
Partially Observable Markov Decision Processes (POMDPs) Pieter Abbeel UC Berkeley EECS Many slides adapted from Jur van den Berg Outline POMDPs Separation Principle / Certainty Equivalence Locally Optimal
More informationEVALUATING SYMMETRIC INFORMATION GAP BETWEEN DYNAMICAL SYSTEMS USING PARTICLE FILTER
EVALUATING SYMMETRIC INFORMATION GAP BETWEEN DYNAMICAL SYSTEMS USING PARTICLE FILTER Zhen Zhen 1, Jun Young Lee 2, and Abdus Saboor 3 1 Mingde College, Guizhou University, China zhenz2000@21cn.com 2 Department
More informationChannel Probing in Communication Systems: Myopic Policies Are Not Always Optimal
Channel Probing in Communication Systems: Myopic Policies Are Not Always Optimal Matthew Johnston, Eytan Modiano Laboratory for Information and Decision Systems Massachusetts Institute of Technology Cambridge,
More informationAsynchronous random Boolean network model based on elementary cellular automata
Asynchronous random Boolean networ model based on elementary cellular automata Mihaela T. Matache* Jac Heidel Department of Mathematics University of Nebrasa at Omaha Omaha, NE 6882-243, USA *dmatache@mail.unomaha.edu
More informationBagging During Markov Chain Monte Carlo for Smoother Predictions
Bagging During Markov Chain Monte Carlo for Smoother Predictions Herbert K. H. Lee University of California, Santa Cruz Abstract: Making good predictions from noisy data is a challenging problem. Methods
More informationCase Studies of Logical Computation on Stochastic Bit Streams
Case Studies of Logical Computation on Stochastic Bit Streams Peng Li 1, Weikang Qian 2, David J. Lilja 1, Kia Bazargan 1, and Marc D. Riedel 1 1 Electrical and Computer Engineering, University of Minnesota,
More informationOptimal path planning using Cross-Entropy method
Optimal path planning using Cross-Entropy method F Celeste, FDambreville CEP/Dept of Geomatics Imagery Perception 9 Arcueil France {francisceleste, fredericdambreville}@etcafr J-P Le Cadre IRISA/CNRS Campus
More informationHOMEWORK #4: LOGISTIC REGRESSION
HOMEWORK #4: LOGISTIC REGRESSION Probabilistic Learning: Theory and Algorithms CS 274A, Winter 2019 Due: 11am Monday, February 25th, 2019 Submit scan of plots/written responses to Gradebook; submit your
More informationin a Rao-Blackwellised Unscented Kalman Filter
A Rao-Blacwellised Unscented Kalman Filter Mar Briers QinetiQ Ltd. Malvern Technology Centre Malvern, UK. m.briers@signal.qinetiq.com Simon R. Masell QinetiQ Ltd. Malvern Technology Centre Malvern, UK.
More information5.3 METABOLIC NETWORKS 193. P (x i P a (x i )) (5.30) i=1
5.3 METABOLIC NETWORKS 193 5.3 Metabolic Networks 5.4 Bayesian Networks Let G = (V, E) be a directed acyclic graph. We assume that the vertices i V (1 i n) represent for example genes and correspond to
More informationIntroduction to Bayesian Learning. Machine Learning Fall 2018
Introduction to Bayesian Learning Machine Learning Fall 2018 1 What we have seen so far What does it mean to learn? Mistake-driven learning Learning by counting (and bounding) number of mistakes PAC learnability
More informationLEARNING DYNAMIC SYSTEMS: MARKOV MODELS
LEARNING DYNAMIC SYSTEMS: MARKOV MODELS Markov Process and Markov Chains Hidden Markov Models Kalman Filters Types of dynamic systems Problem of future state prediction Predictability Observability Easily
More informationAsynchronous Non-Convex Optimization For Separable Problem
Asynchronous Non-Convex Optimization For Separable Problem Sandeep Kumar and Ketan Rajawat Dept. of Electrical Engineering, IIT Kanpur Uttar Pradesh, India Distributed Optimization A general multi-agent
More informationMini-Course 07 Kalman Particle Filters. Henrique Massard da Fonseca Cesar Cunha Pacheco Wellington Bettencurte Julio Dutra
Mini-Course 07 Kalman Particle Filters Henrique Massard da Fonseca Cesar Cunha Pacheco Wellington Bettencurte Julio Dutra Agenda State Estimation Problems & Kalman Filter Henrique Massard Steady State
More informationApproximate Bayesian Computation and Particle Filters
Approximate Bayesian Computation and Particle Filters Dennis Prangle Reading University 5th February 2014 Introduction Talk is mostly a literature review A few comments on my own ongoing research See Jasra
More informationGaussian Processes (10/16/13)
STA561: Probabilistic machine learning Gaussian Processes (10/16/13) Lecturer: Barbara Engelhardt Scribes: Changwei Hu, Di Jin, Mengdi Wang 1 Introduction In supervised learning, we observe some inputs
More informationLogic-Based Modeling in Systems Biology
Logic-Based Modeling in Systems Biology Alexander Bockmayr LPNMR 09, Potsdam, 16 September 2009 DFG Research Center Matheon Mathematics for key technologies Outline A.Bockmayr, FU Berlin/Matheon 2 I. Systems
More informationBayesian Learning (II)
Universität Potsdam Institut für Informatik Lehrstuhl Maschinelles Lernen Bayesian Learning (II) Niels Landwehr Overview Probabilities, expected values, variance Basic concepts of Bayesian learning MAP
More informationProcedia Computer Science 00 (2011) 000 6
Procedia Computer Science (211) 6 Procedia Computer Science Complex Adaptive Systems, Volume 1 Cihan H. Dagli, Editor in Chief Conference Organized by Missouri University of Science and Technology 211-
More informationSparse Bayesian Logistic Regression with Hierarchical Prior and Variational Inference
Sparse Bayesian Logistic Regression with Hierarchical Prior and Variational Inference Shunsuke Horii Waseda University s.horii@aoni.waseda.jp Abstract In this paper, we present a hierarchical model which
More informationLearning Tetris. 1 Tetris. February 3, 2009
Learning Tetris Matt Zucker Andrew Maas February 3, 2009 1 Tetris The Tetris game has been used as a benchmark for Machine Learning tasks because its large state space (over 2 200 cell configurations are
More informationThe Kalman Filter ImPr Talk
The Kalman Filter ImPr Talk Ged Ridgway Centre for Medical Image Computing November, 2006 Outline What is the Kalman Filter? State Space Models Kalman Filter Overview Bayesian Updating of Estimates Kalman
More informationCombine Monte Carlo with Exhaustive Search: Effective Variational Inference and Policy Gradient Reinforcement Learning
Combine Monte Carlo with Exhaustive Search: Effective Variational Inference and Policy Gradient Reinforcement Learning Michalis K. Titsias Department of Informatics Athens University of Economics and Business
More informationFinite-Horizon Optimal State-Feedback Control of Nonlinear Stochastic Systems Based on a Minimum Principle
Finite-Horizon Optimal State-Feedbac Control of Nonlinear Stochastic Systems Based on a Minimum Principle Marc P Deisenroth, Toshiyui Ohtsua, Florian Weissel, Dietrich Brunn, and Uwe D Hanebec Abstract
More informationAutomated Segmentation of Low Light Level Imagery using Poisson MAP- MRF Labelling
Automated Segmentation of Low Light Level Imagery using Poisson MAP- MRF Labelling Abstract An automated unsupervised technique, based upon a Bayesian framework, for the segmentation of low light level
More informationQ-Learning and Enhanced Policy Iteration in Discounted Dynamic Programming
MATHEMATICS OF OPERATIONS RESEARCH Vol. 37, No. 1, February 2012, pp. 66 94 ISSN 0364-765X (print) ISSN 1526-5471 (online) http://dx.doi.org/10.1287/moor.1110.0532 2012 INFORMS Q-Learning and Enhanced
More informationBayesian statistics. DS GA 1002 Statistical and Mathematical Models. Carlos Fernandez-Granda
Bayesian statistics DS GA 1002 Statistical and Mathematical Models http://www.cims.nyu.edu/~cfgranda/pages/dsga1002_fall15 Carlos Fernandez-Granda Frequentist vs Bayesian statistics In frequentist statistics
More informationMarkov Decision Processes Chapter 17. Mausam
Markov Decision Processes Chapter 17 Mausam Planning Agent Static vs. Dynamic Fully vs. Partially Observable Environment What action next? Deterministic vs. Stochastic Perfect vs. Noisy Instantaneous vs.
More informationThe Origin of Deep Learning. Lili Mou Jan, 2015
The Origin of Deep Learning Lili Mou Jan, 2015 Acknowledgment Most of the materials come from G. E. Hinton s online course. Outline Introduction Preliminary Boltzmann Machines and RBMs Deep Belief Nets
More information