Allocation of Resources in CB Defense: Optimization and Ranking. NMSU: J. Cowie, H. Dang, B. Li, Hung T. Nguyen; UNM: F. Gilfeather. Oct 26, 2005


1 Allocation of Resources in CB Defense: Optimization and Ranking, by NMSU: J. Cowie, H. Dang, B. Li, Hung T. Nguyen; UNM: F. Gilfeather. Oct 26, 2005

2 Outline: Problem Formulation; Architecture of Decision System; Optimization Methods; Ranking Procedures; Conclusions and Future Work

3 Problem Formulation. Given a budget of $M, let θ = (θ_1, ..., θ_k) be the mitigating variables of some asset. Find optimal allocations (x_1, ..., x_k) to (θ_1, ..., θ_k) that minimize the consequences c = (c_1, ..., c_m) of a CB attack on the asset, and rank these allocations according to various possible preferences of decision-makers. Each x_i is the amount allocated to θ_i, subject to the budget constraint Σ_{i=1}^{k} x_i = $M.

4 Problem Formulation. Example. Asset: an airbase. Money: $M = $1 million. Defense measures: θ_1: chemical agent detector; θ_2: biological agent detector; θ_3: perimeter protection; θ_4: trained onsite personnel; θ_5: chemical prophylaxis; θ_6: biological prophylaxis; θ_7: medical treatment. Consequences: c_1: number of casualties; c_2: cost of remediation; c_3: number of days of operational disruption; c_4: negative geo-political impacts.

5 Problem Formulation. Optimal allocations: we need to formulate an objective function φ: Ω → Ψ, where Ω = {(x_1, ..., x_k) : Σ_{i=1}^{k} x_i = M} ⊂ R^k is the space of allocations and Ψ = {c = (c_1, ..., c_m)} ⊂ R^m is the space of consequences. The optimization problem is min φ(x_1, ..., x_k) subject to (x_1, ..., x_k) ∈ Ω, which is an optimization problem with multiple objectives. In principle, the problem can be solved by standard techniques using decision-makers' preferences and value trade-offs.

6 Problem Formulation. How to obtain the objective function φ: R^k → R^m? The relation between X = (x_1, ..., x_k) and the consequence φ(X) is the composition X = (x_1, ..., x_k) → θ(X) = (θ_1, ..., θ_k) → c(θ(X)) = (c_1(θ), ..., c_m(θ)), i.e., φ(X) = c∘θ(X). So: a) c(θ): the consequence c is a function of θ (data/scenarios from experts); b) θ(X): θ is a function of X (the cost model).
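A minimal Python sketch of this composition φ(X) = c∘θ(X); the linear cost model and the consequence model below are illustrative assumptions, not the project's actual data.

```python
import numpy as np

# Hypothetical sketch of the composition phi(X) = c(theta(X)).
# The cost model theta() and consequence model c() are illustrative placeholders.

def theta(x, unit_costs):
    """Cost model: dollars x_i buy x_i / unit_costs_i units of measure theta_i."""
    return np.asarray(x) / np.asarray(unit_costs)

def consequences(th, baseline, effectiveness):
    """Consequence model: each unit of theta_i reduces consequence c_j by
    effectiveness[i, j], floored at zero."""
    reduction = np.asarray(th) @ np.asarray(effectiveness)
    return np.maximum(np.asarray(baseline) - reduction, 0.0)

def phi(x, unit_costs, baseline, effectiveness):
    """Objective phi = c o theta, mapping an allocation to a consequence vector."""
    return consequences(theta(x, unit_costs), baseline, effectiveness)

# Example: 3 defense measures, 2 consequence components (made-up numbers).
x = [40.0, 30.0, 30.0]                      # allocation, sums to budget M = 100
unit_costs = [4.0, 6.0, 10.0]               # dollars per unit of each measure
baseline = [500.0, 80.0]                    # consequences with no improvement
effectiveness = [[20.0, 1.0],               # e[i, j]: reduction of c_j per unit of theta_i
                 [30.0, 2.0],
                 [10.0, 5.0]]
print(phi(x, unit_costs, baseline, effectiveness))
```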

7 Problem Formulation. Example of an objective function: c_i(X) = α Σ_{s=1}^{p} { c_i^0(s) L(s) Π_j [1 − e(s, θ_j(X))] }, where α is a normalization constant, p is the number of scenarios, c_i^0(s) is the initial consequence (before improvements) for the s-th scenario, L(s) is a probability (the likelihoods sum to 1), θ_j(X) is the number of detectors of the j-th kind (a function of X), and e(s, θ_j(X)) is the effectivity of the j-th kind of defense measure on scenario s.
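The scenario-weighted objective above can be sketched as follows; all scenario data (initial consequences, likelihoods, effectivities) are made-up placeholders.

```python
import numpy as np

# Hedged sketch of the scenario-based objective from the slide:
#   c_i(X) = alpha * sum_s c0_i(s) * L(s) * prod_j (1 - e(s, theta_j(X)))
# All numbers below are illustrative only.

def scenario_consequence(c0, L, effectivity, alpha=1.0):
    """c0: (p, m) initial consequences per scenario;
    L: (p,) scenario likelihoods summing to 1;
    effectivity: (p, k) effectivity of each measure in each scenario."""
    c0 = np.asarray(c0)
    L = np.asarray(L)
    e = np.asarray(effectivity)
    survival = np.prod(1.0 - e, axis=1)          # prod_j (1 - e(s, theta_j(X)))
    return alpha * (L * survival) @ c0           # weighted sum over scenarios

c0 = [[400.0, 60.0], [900.0, 120.0]]             # p = 2 scenarios, m = 2 consequences
L = [0.7, 0.3]
effectivity = [[0.5, 0.2, 0.1], [0.3, 0.4, 0.0]] # k = 3 defense measures
print(scenario_consequence(c0, L, effectivity))
```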

8 Problem Formulation. Minimize a nonlinear test objective f(x_1, x_2, ..., x_6) built from terms of the form [1 − sin(π·)], subject to Σ_{i=1}^{6} x_i = 100 and x_i ≥ 0. Using simulated annealing, we obtain the minimizing allocation (x_1, x_2, ..., x_6) and the minimum value (numerical results tabulated on the slide).
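A hedged sketch of this kind of constrained minimization by simulated annealing, using SciPy's dual_annealing; the slide's exact test function is not reproduced here, so a stand-in sine-based objective is used, and the budget constraint is enforced by rescaling candidates onto the simplex.

```python
import numpy as np
from scipy.optimize import dual_annealing

# Stand-in multimodal objective (illustrative only); the budget constraint
# sum(x) = 100, x >= 0 is handled by rescaling each candidate onto the simplex.

def objective(z):
    x = 100.0 * z / np.sum(z)                     # project onto the budget simplex
    return np.sum((1.0 - np.sin(np.pi * x / 25.0)) ** 2)

bounds = [(1e-6, 100.0)] * 6
result = dual_annealing(objective, bounds, seed=0, maxiter=500)
x_opt = 100.0 * result.x / np.sum(result.x)
print("allocation:", np.round(x_opt, 2), "objective:", round(result.fun, 4))
```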

9 Problem Formulation. Histogram of optimal allocations (X = $100): dollars allocated to units 1–6 for each of allocations 1–5 (figure).

10 Problem Formulation. Component-wise optimization is not a proper solution for our multiple-objective problem. Example 2: minimize φ(x_1, ..., x_k) over a constrained set A with predetermined acceptable consequence levels: A = {(x_1, ..., x_k) : Σ_{i=1}^{k} x_i = M and φ_j(x_1, ..., x_k) ≤ ĉ_j for each j}.

11 Architecture. INPUT: data from the scenario tree (mitigating variables θ, likelihoods, consequences); total budget $X; acceptable consequences Ĉ; cost model; utility function C = φ(X); human interface (questionnaire). ANALYTIC TOOLBOX: OPTIMIZATION minimizes the utility function C = φ(x) subject to Σ x_i ≤ X and C ≤ Ĉ; RANKING ranks allocations using the Choquet integral, AHP, and multi-objective programming. OUTPUT: optimal allocation x*_1, x*_2, ..., x*_k and the corresponding consequences c*_1, c*_2, ..., c*_m.

12 Optimization. 1. Optimization of the problem in Example 2. Minimize the total cost M = Σ_{i=1}^{k} w_i Δθ_i subject to c_j − Σ_{i=1}^{k} e_ij Δθ_i ≤ ĉ_j for j = 1, ..., m, where k is the number of defense measures (mitigating variables), w_i is the cost of improving θ_i by one unit, Δθ_i is the improvement in defense measure θ_i, c_j is the j-th consequence component in the initial variant (data), e_ij is the decrease in the consequence c_j of a scenario attack if we increase θ_i by one unit, and m is the number of components of the consequence vector after an attack.

13 Optimization. Let ĉ = [200, 50, 29, 3], c = [1000, 100, 60, 20], w = [w_1, ..., w_5] = [3, 5, 7, 27, 6], and let e = (e_ij) be the given 5×4 effectiveness matrix. We want to find Δθ_i's such that M = Σ_{i=1}^{5} w_i Δθ_i is minimized.
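Below is a sketch of how this cost-minimization problem could be attacked by brute-force search over small integer improvements; the slide's effectiveness matrix e = (e_ij) is not reproduced here, so the matrix in the sketch is an illustrative assumption.

```python
import numpy as np
from itertools import product

# Hedged sketch: find integer improvements dtheta_i minimizing
# M = sum_i w_i * dtheta_i subject to c_j - sum_i e_ij * dtheta_i <= chat_j.
# The effectiveness matrix e below is made up for illustration.

w = np.array([3, 5, 7, 27, 6])                  # cost per unit improvement (slide data)
c = np.array([1000, 100, 60, 20])               # initial consequences (slide data)
chat = np.array([200, 50, 29, 3])               # acceptable consequence levels (slide data)
e = np.array([[90, 5, 3, 2],                    # e[i, j]: made-up effectiveness values
              [60, 8, 4, 1],
              [70, 6, 5, 2],
              [80, 10, 6, 3],
              [50, 7, 4, 2]])

best = None
for dtheta in product(range(1, 11), repeat=5):  # brute force over a small integer grid
    d = np.array(dtheta)
    if np.all(c - d @ e <= chat):               # all consequence targets met
        cost = int(w @ d)
        if best is None or cost < best[0]:
            best = (cost, d)

print("cheapest feasible improvement:", best)
```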

14 Optimization. Ten options for (Δθ_1, ..., Δθ_5) with their corresponding amounts of money: (4,1,1,1,1) with M = 97; (1,2,1,1,2) with M = 99; (3,2,1,1,1) with M = 99; (9,1,1,1,4) with M = 100; (1,1,1,1,3) with M = 100; (3,1,1,1,2) with M = 100; (5,1,1,1,1) with M = 100; (10,3,1,1,2) with M = 101; (2,3,1,1,1) with M = 101; (6,2,1,1,5) with M = 102.

15 Optimization. Histogram for the options: number of detectors of kinds 1–5 for each of options 1–6 (figure).

16 Optimization. 2. Optimization of a vector-valued utility function using decision-makers' preferences. To optimize φ(X) = (c_1(X), ..., c_m(X))^T, where X = (x_1, ..., x_k), we can use the common method of a weighted linear combination of the components c_j(X), i.e., optimize Σ_{j=1}^{m} w_j c_j(X), where w_j is the scaling weight for c_j.
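A small sketch of this weighted-linear-combination scalarization; the consequence models and the weight for the second consequence are illustrative assumptions.

```python
import numpy as np

# Sketch of the scalarization sum_j w_j * c_j(X); all models and weights are made up.

def c1(x):                       # repair cost in dollars (illustrative model)
    return 1000.0 - 8.0 * x[0] - 3.0 * x[1]

def c2(x):                       # days of operational disruption (illustrative model)
    return 30.0 - 0.1 * x[0] - 0.2 * x[1]

# w_1 = 1 because the overall utility is expressed in dollars; w_2 is a
# dollar-equivalent weight that would be elicited from decision makers.
w = np.array([1.0, 50000.0])

def scalarized(x):
    return w @ np.array([c1(x), c2(x)])

print(scalarized([60.0, 40.0]))
```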

17 Optimization. How to obtain the w_j's? Example: let c_1(X) = repair cost ($) and c_2(X) = human casualties. The overall utility function is expressed in $, so w_1 = 1. Decision makers will be asked to express their preferences among consequences, leading to the identification of w_2 (Keeney and Raiffa, 1993).

18 The weighting method

19 Optimization. 3. Some optimization methods. a) Genetic algorithms. This "evolutionary" type of optimization method is appropriate for non-smooth objective functions and is inspired by the reproduction process in biology. It is designed to optimize an objective function f whose analytic expression we do not know but whose value f(θ) can be found for any given input θ = (θ_1, θ_2, ..., θ_k).
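A minimal genetic-algorithm sketch, assuming a simple black-box loss and generic selection/crossover/mutation settings; none of these choices come from the slides.

```python
import numpy as np

# Minimal GA sketch for a black-box objective f(theta); all settings are illustrative.
rng = np.random.default_rng(0)

def f(theta):                      # black-box loss we can only evaluate
    return np.sum((theta - 3.0) ** 2)

pop = rng.uniform(0, 10, size=(40, 5))             # initial population of candidates
for generation in range(100):
    scores = np.array([f(ind) for ind in pop])
    parents = pop[np.argsort(scores)[:20]]          # selection: keep the fittest half
    cut = rng.integers(1, 5, size=20)               # crossover: one-point recombination
    children = np.array([np.concatenate([parents[i][:c], parents[(i + 1) % 20][c:]])
                         for i, c in enumerate(cut)])
    children += rng.normal(0, 0.1, children.shape)  # mutation: small Gaussian noise
    pop = np.vstack([parents, children])

best = pop[np.argmin([f(ind) for ind in pop])]
print("best candidate:", np.round(best, 2))         # f is minimized at theta = (3, ..., 3)
```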

20 Optimization. b) Stochastic approximation (Robbins and Monro, 1951). This method is designed to optimize an unknown function f(θ) when, for a specified θ, the value f(θ) can be provided. This can be done by asking experts from MIIS. Problem: find a minimum point θ* ∈ R^p of a real-valued function f(θ), called the "loss function," that is observed in the presence of noise.

21 Optimization. (1) Finite differences. The iterative procedure is θ̂_{k+1} = θ̂_k − a_k ĝ_k(θ̂_k), with ĝ_ki(θ̂_k) = [y(θ̂_k + c_k e_i) − y(θ̂_k − c_k e_i)] / (2 c_k), where y(θ) is the observation of f(θ), e_i is a vector with a 1 in the i-th place and 0 elsewhere, and a_k and c_k are gain sequences that go to 0 at a rate neither too fast nor too slow. Initialize θ̂_1, calculate ĝ_1(θ̂_1) and θ̂_2, and continue this process until θ̂_k converges to θ*, which is our optimization solution.
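A short sketch of the finite-difference stochastic approximation loop described above; the noisy loss and the gain sequences a_k, c_k are illustrative choices.

```python
import numpy as np

# Sketch of finite-difference stochastic approximation (FDSA); the noisy loss
# and gain sequences below are illustrative assumptions.
rng = np.random.default_rng(0)

def y(theta):                                   # noisy observation of the loss f
    return np.sum((theta - 2.0) ** 2) + rng.normal(0, 0.1)

theta = np.zeros(3)                             # initial estimate theta_hat_1
for k in range(1, 2001):
    a_k = 0.5 / k                               # gain sequences decaying to 0
    c_k = 0.1 / k ** 0.25
    g = np.zeros_like(theta)
    for i in range(len(theta)):                 # central difference in each coordinate
        e = np.zeros_like(theta)
        e[i] = 1.0
        g[i] = (y(theta + c_k * e) - y(theta - c_k * e)) / (2 * c_k)
    theta = theta - a_k * g                     # Robbins-Monro style update

print("estimate of theta*:", np.round(theta, 2))  # f is minimized at theta = (2, 2, 2)
```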

22 Optimization. Example: let ε ~ N(0, 1), and optimize f(θ) = (Xθ − C)ᵀ(Xθ − C) + ε, for given X and C, with the finite-difference procedure. The resulting estimate θ̂ is close to the result of the case where the observations are not influenced by noise.

23 Optimization. (2) Simultaneous Perturbation Stochastic Approximation (SPSA) with injected noise (Maryak and Chin, 2001), to obtain a global minimum. The update is θ̂_{k+1} = θ̂_k − a_k ĝ_k(θ̂_k) + q_k ω_k, with ĝ_k(θ̂_k) = (2 c_k)^{-1} [y(θ̂_k + c_k Δ_k) − y(θ̂_k − c_k Δ_k)] Δ_k^{-1}, where a_k, c_k, q_k are gain sequences going to 0 at a rate neither too fast nor too slow; y(·) is the observation of f(·); Δ_k has i.i.d. Bernoulli (±1) components; and ω_k is i.i.d. N(0, 1) injected noise.
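A sketch of SPSA with injected noise in the spirit of the update above; the loss function, gain constants, and noise magnitude are illustrative assumptions rather than the slides' settings.

```python
import numpy as np

# Sketch of SPSA with injected noise; the multimodal loss and all gains are illustrative.
rng = np.random.default_rng(1)

def y(theta):                                     # noisy loss observation
    return np.sum(theta ** 4 - 16 * theta ** 2 + 5 * theta) / 2 + rng.normal(0, 0.1)

theta = np.zeros(4)
for k in range(1, 5001):
    a_k = 0.05 / (k + 10) ** 0.602                # standard SPSA-style gain decay
    c_k = 0.1 / k ** 0.101
    q_k = 0.5 / k                                 # injected-noise magnitude
    delta = rng.choice([-1.0, 1.0], size=theta.shape)   # Bernoulli +/-1 perturbation
    g = (y(theta + c_k * delta) - y(theta - c_k * delta)) / (2 * c_k * delta)
    theta = theta - a_k * g + q_k * rng.normal(size=theta.shape)  # injected noise

print("estimate:", np.round(theta, 3))            # minima of this loss are near -2.9 per coordinate
```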

24 Optimization of a mockup math model using simulated annealing

25 Ranking. Rank the solutions X = (X_1, X_2, ..., X_k) obtained via optimization to find the most "efficient" one (multicriteria decision making, with the mitigating variables θ_1, θ_2, ..., θ_k as criteria). Goal: define a total order between alternatives, i.e., define a map φ: R^k → R so that alternative X is preferred to alternative Y if φ(Y) ≤ φ(X). The total order should reflect the degree of importance of each criterion θ_i in its contribution to the "total score" φ(X).

26 Ranking. In our ranking problem the criteria are interactive, e.g., mitigating variables can contribute to damage reduction in combination. Non-linear aggregation operators have proved more efficient than linear ones in such situations, but they may be computationally prohibitive. An approximation can be made with a linear aggregation operator based on a simple analysis of the interactions between criteria.

27 Ranking. 1. Ranking with AHP: a linear aggregation operator based on a simple analysis of the interactions between criteria, namely pair-wise comparisons. It uses fuzzy logic to extract degrees of importance between pairs of criteria and conducts a synthesis of priorities leading to a weighted-average operator. It can handle linguistic (qualitative) values of allocations.

28 Ranking. 2. Ranking with the Choquet integral: a non-linear aggregation operator (more general and axiomatically justified). Description of the discrete Choquet integral: denote by T = {c_1, c_2, ..., c_k} the set of criteria and by x = (x_1, x_2, ..., x_k) the evaluations of the subject x. A fuzzy measure µ on the power set 2^T satisfies a) µ(∅) = 0, µ(T) = 1, and b) A ⊆ B implies µ(A) ≤ µ(B) for A, B ⊆ T. The discrete Choquet integral with respect to the fuzzy measure µ is C_µ(x) = Σ_{i=1}^{k} (x_(i) − x_(i−1)) µ(A_(i)), with x_(0) = 0 and A_(i) = {c_(i), ..., c_(k)}, where (x_(1), ..., x_(k)) is (x_1, ..., x_k) ranked in increasing order and {c_(i), ..., c_(k)} is the subset of criteria corresponding to {x_(i), ..., x_(k)}.
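A small sketch of the discrete Choquet integral as defined above; the fuzzy measure shown is a made-up capacity on three criteria, not the one used in the slides' example.

```python
# Sketch of the discrete Choquet integral C_mu(x); the capacity mu is illustrative.

def choquet(x, mu):
    """x: evaluations on k criteria; mu: dict mapping frozensets of criterion
    indices to measure values, with mu[empty] = 0 and mu[all criteria] = 1."""
    k = len(x)
    order = sorted(range(k), key=lambda i: x[i])      # indices of x in increasing order
    total, prev = 0.0, 0.0
    for pos, i in enumerate(order):
        a_i = frozenset(order[pos:])                  # criteria with the i-th smallest value and larger
        total += (x[i] - prev) * mu[a_i]
        prev = x[i]
    return total

# Illustrative fuzzy measure on criteria {0, 1, 2} satisfying monotonicity.
mu = {frozenset(): 0.0, frozenset({0}): 0.2, frozenset({1}): 0.1,
      frozenset({2}): 0.4, frozenset({0, 1}): 0.35, frozenset({0, 2}): 0.7,
      frozenset({1, 2}): 0.6, frozenset({0, 1, 2}): 1.0}

print(choquet([0.6, 0.3, 0.8], mu))
```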

29 Ranking. Example: rank ten 5-component consequence vectors (4,1,1,1,1), (3,1,1,1,2), (1,2,1,1,2), (5,1,1,1,1), (3,2,1,1,1), (10,3,1,1,2), (9,1,1,1,4), (2,3,1,1,1), (1,1,1,1,3), (6,2,1,1,5) with respect to a given fuzzy measure µ (table of each vector's Choquet integral and resulting rank).

30 Ranking. 3. Identification of fuzzy measures. Fuzzy measures have to satisfy the monotonicity constraints. Two methods to identify a fuzzy measure µ: i) supervised learning, via a) quadratic programming or b) a neural network; ii) unsupervised learning (method of entropy): view the criteria as a random vector and the allocations as a random sample, estimate all joint partial density functions, and use the entropies of subsets of criteria as the fuzzy measure values.

31 Ranking. a. Identification of fuzzy measures with quadratic programming. For training data consisting of nine 3-criteria evaluation vectors x_1 = (0.9, 0.7, 0.5), ..., x_9, with overall evaluations Y = (y_1, y_2, ..., y_9), our algorithm identifies a fuzzy measure µ, together with the corresponding quadratic error.
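A hedged sketch of this kind of supervised fuzzy-measure identification; it uses SciPy's SLSQP solver on a least-squares objective with monotonicity constraints rather than a dedicated quadratic-programming routine, and the nine training vectors from the slide are not reproduced here, so the data below are made-up placeholders.

```python
import numpy as np
from scipy.optimize import minimize

# Fit the 6 free values of a capacity on 3 criteria so that the Choquet
# integral reproduces given overall evaluations; data and targets are made up.

SETS = [frozenset({0}), frozenset({1}), frozenset({2}),
        frozenset({0, 1}), frozenset({0, 2}), frozenset({1, 2})]

def measure(v):
    mu = {frozenset(): 0.0, frozenset({0, 1, 2}): 1.0}
    mu.update(dict(zip(SETS, v)))
    return mu

def choquet(x, mu):
    order = sorted(range(len(x)), key=lambda i: x[i])
    total, prev = 0.0, 0.0
    for pos, i in enumerate(order):
        total += (x[i] - prev) * mu[frozenset(order[pos:])]
        prev = x[i]
    return total

X = np.array([[0.9, 0.7, 0.5], [0.2, 0.8, 0.4], [0.6, 0.1, 0.9],
              [0.3, 0.3, 0.7], [0.8, 0.5, 0.2]])
y = np.array([0.75, 0.45, 0.60, 0.40, 0.55])        # illustrative target evaluations

def sq_error(v):
    mu = measure(v)
    return sum((choquet(x, mu) - t) ** 2 for x, t in zip(X, y))

# Monotonicity constraints mu(A) <= mu(B) for A a subset of B, written as g(v) >= 0.
cons = [{"type": "ineq", "fun": lambda v, a=a, b=b: v[b] - v[a]}
        for a, A in enumerate(SETS) for b, B in enumerate(SETS) if A < B]

res = minimize(sq_error, x0=np.full(6, 0.5), bounds=[(0, 1)] * 6,
               constraints=cons, method="SLSQP")
print(dict(zip([tuple(sorted(s)) for s in SETS], np.round(res.x, 3))),
      "quadratic error:", round(res.fun, 4))
```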

32 Ranking. b. Identification of fuzzy measures with neural networks. The fuzzy measures are set up as the weights of a feed-forward neural network, which are found by training the network with the backpropagation algorithm (Wang and Wang, 1997).

33 Ranking. Example: identification of fuzzy measures with neural networks (table of training samples: Feature 1, Feature 2, Feature 3, Evaluation). Our network produces a fuzzy measure with µ(∅) = 0, µ{1} = 0.2, µ{1,2} = 0.3386, µ{3} = 0.399, µ{1,2,3} = 1, and corresponding values for the remaining subsets, which satisfies the monotonicity constraints.

34 Conclusions and Future Work: data structure; experts' and decision-makers' assistance; optimization methods; ranking procedures; some other issues.

35 References
[1] Deb, K. (2002). Multi-Objective Optimization Using Evolutionary Algorithms. J. Wiley.
[2] Grabisch, M., Nguyen, H.T. and Walker, E.A. (1994). Fundamentals of Uncertainty Calculi with Applications to Fuzzy Inference. Kluwer Academic.
[3] Keeney, R.L. and Raiffa, H. (1993). Decisions with Multiple Objectives. Cambridge University Press.
[4] Maryak, J.L. and Chin, D.C. (2001). Global Random Optimization by Simultaneous Perturbation Stochastic Approximation. Proceedings of the American Control Conference.
[5] Robbins, H. and Monro, S. (1951). A stochastic approximation method. Ann. Math. Statist. 22, no. 3.
[6] Saaty, T.L. (2000). Fundamentals of Decision Making and Priority Theory. RWS Publications.
[7] Spall, J.C. (1992). "Multivariate Stochastic Approximation Using a Simultaneous Perturbation Gradient Approximation," IEEE Trans. Automat. Control, 37.
[8] Verikas, V., Lipnickas, A. and Malmqvist, K. (2000). Fuzzy measures in neural network fusion. Proceedings of the 7th International Conference on Neural Information Processing, ICONIP-2000.
[9] Yin, G. (1999). "Rates of Convergence for a Class of Global Stochastic Optimization Algorithms," SIAM J. Optim., 10.
[10] Wang, J. and Wang, Z. (1997). Using neural networks to determine Sugeno measures by statistics. Neural Networks, 10, no. 1.
