CS 354R: Computer Game Technology


CS 354R: Computer Game Technology AI Fuzzy Logic and Neural Nets Fall 2017

Fuzzy Logic
- A philosophical approach: decisions based on degree of truth
- Not a method for reasoning under uncertainty (that's probability)
- Crisp facts: distinct boundaries
- Fuzzy facts: imprecise boundaries
- Probability: incomplete facts
- Example: a scout reporting an enemy
  - "Two tanks at grid NV 54" (crisp)
  - "A few tanks at grid NV 54" (fuzzy)
  - "There might be 2 tanks at grid NV 54" (probabilistic)

Apply to Computer Games
- Players can have different characteristics
  - Strength: strong, medium, weak
  - Aggressiveness: meek, medium, nasty
  - If meek and attacked, run away fast
  - If medium and attacked, run away slowly
  - If nasty and strong and attacked, attack back
- Control of a vehicle
  - Should slow down when close to the car in front
  - Should speed up when far behind the car in front
- Provides smoother transitions, not a sharp boundary

Fuzzy Sets
- Provide a way to write symbolic rules with terms like "medium" but evaluate them in a quantified way
- Classical set theory: an object is either in or not in a set
- Fuzzy sets have a smooth boundary
  - Not completely in or out: somebody 6 ft tall is 80% in the "tall" set
- Fuzzy set theory: an object is in a set by matter of degree
  - 1.0 => in the set
  - 0.0 => not in the set
  - 0.0 < degree < 1.0 => partially in the set
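A degree of membership is just a function from a value to [0.0, 1.0]. A minimal sketch of a "tall" membership function, with ramp breakpoints that are assumptions chosen so a 6 ft person is 80% in the set as in the example above:

```python
def tall(height_ft):
    """Hypothetical membership function for the fuzzy set 'tall'.

    A linear ramp: 0.0 at or below 5 ft, 1.0 at or above 6.25 ft.
    The breakpoints are assumed values, picked so that 6 ft maps to 0.8.
    """
    if height_ft <= 5.0:
        return 0.0                        # completely out of the set
    if height_ft >= 6.25:
        return 1.0                        # completely in the set
    return (height_ft - 5.0) / 1.25       # partially in the set

print(tall(6.0))  # 0.8
```

Any shape (trapezoid, Gaussian) works as long as it maps values to [0.0, 1.0]; ramps and triangles are common in games because they are cheap to evaluate.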

Example Fuzzy Variable
[Figure: membership (degree of truth) curves for the sets Meek, Medium, and Nasty over an Aggressiveness axis running from -1 to 1]

Fuzzy Set Operations: Complement
- The degree to which you believe something is not in the set is 1.0 minus the degree to which you believe it is in the set
[Figure: a fuzzy set FS and its complement on a membership axis from 0.0 to 1.0]

Fuzzy Set Ops: Intersection (AND)
- If you have x degree of faith in statement A, and y degree of faith in statement B, how much faith do you have in the statement "A and B"?
  - E.g., how much faith that a person is about 6 ft high and tall?
[Figure: membership curves for "About 6 ft" and "Tall" over a Height axis]

Fuzzy Set Ops: Intersection (AND)
- Assumption: membership in one set does not affect membership in another
- Take the min of your beliefs in each individual statement
- Also works if the statements are about different variables
  - "Dangerous and injured": belief is the min of the degree to which you believe they are dangerous and the degree to which you believe they are injured
[Figure: "about 6 ft high and tall" as the min of the two membership curves over Height]

Fuzzy Set Ops: Union (OR)
- If you have x degree of faith in statement A, and y degree of faith in statement B, how much faith do you have in the statement "A or B"?
  - E.g., how much faith that a person is about 6 ft high or tall?
[Figure: membership curves for "About 6 ft" and "Tall" over a Height axis]

Fuzzy Set Ops: Union (OR)
- Take the max of your beliefs in each individual statement
- Also works if the statements are about different variables
  - "Dangerous or injured": belief is the max of the degree to which you believe they are dangerous and the degree to which you believe they are injured
[Figure: "about 6 ft high or tall" as the max of the two membership curves over Height]
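All three operations are one line of arithmetic on membership degrees. A sketch, with made-up degrees of belief for "dangerous" and "injured":

```python
def fuzzy_not(a):
    """Complement: 1.0 minus the degree of membership."""
    return 1.0 - a

def fuzzy_and(a, b):
    """Intersection: the min of the two beliefs."""
    return min(a, b)

def fuzzy_or(a, b):
    """Union: the max of the two beliefs."""
    return max(a, b)

dangerous, injured = 0.75, 0.4          # assumed membership degrees
print(fuzzy_and(dangerous, injured))    # 0.4  -> "dangerous and injured"
print(fuzzy_or(dangerous, injured))     # 0.75 -> "dangerous or injured"
print(fuzzy_not(dangerous))             # 0.25 -> "not dangerous"
```

Note that min/max reduce to ordinary Boolean AND/OR when the degrees are exactly 0.0 and 1.0, which is why crisp rules still work unchanged.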

Activity
- Consider fuzzy rules applied to a racing AI. List potential fuzzy variables and fuzzy sets it might use.

Fuzzy Rules
- "If our distance to the car in front is small, and the distance is decreasing slowly, then decelerate quite hard"
  - Fuzzy variables (in blue on the slide): distance, its rate of change, deceleration
  - Fuzzy sets (in red on the slide): small, slowly, quite hard
- We have a certain belief in the truth of the condition, and hence a certain strength of desire for the outcome
- Multiple rules may match to some degree, so we need a means to arbitrate and choose a particular goal: defuzzification

Fuzzy Rules Example (from Game Programming Gems)
- Rules for controlling a car
- Variables: distance to the car in front, delta (how fast the distance is changing), and acceleration (how to apply it)
- Sets:
  - Distance: very small, small, perfect, big, very big
  - Delta: shrinking fast, shrinking, stable, growing, growing fast
  - Acceleration: brake hard, slow down, none, speed up, floor it
- A rule for every combination of distance and delta sets defines an acceleration set

Set Definitions
[Figure: membership curves for the five distance sets (v. small, small, perfect, big, v. big), the five acceleration sets (brake, slow, none, fast, floor it), and the five delta sets (<<, <, =, >, >>)]

Instance
- Distance could be considered small or perfect
- Delta could be stable or growing
- What is acceleration?
[Figure: the set definitions with a particular distance and delta value marked, each falling between two sets]

Matching
- The relevant rules are:
  - If distance is small and delta is growing, maintain speed
  - If distance is small and delta is stable, slow down
  - If distance is perfect and delta is growing, speed up
  - If distance is perfect and delta is stable, maintain speed
- For the first rule, "distance is small" has truth 0.75 and "delta is growing" has truth 0.3, so the truth of the AND is min(0.75, 0.3) = 0.3
- The other rule strengths are 0.6, 0.1, and 0.1
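The matching step can be sketched as a table lookup plus a min. The membership degrees below are assumptions consistent with the stated strengths ("distance is perfect" is taken as 0.1 so the last two rules come out at 0.1):

```python
# Assumed membership degrees for this instance
truth = {
    ("distance", "small"): 0.75, ("distance", "perfect"): 0.1,
    ("delta", "growing"): 0.3,   ("delta", "stable"): 0.6,
}

# (antecedent A, antecedent B, action) for the four relevant rules
rules = [
    (("distance", "small"),   ("delta", "growing"), "maintain speed"),
    (("distance", "small"),   ("delta", "stable"),  "slow down"),
    (("distance", "perfect"), ("delta", "growing"), "speed up"),
    (("distance", "perfect"), ("delta", "stable"),  "maintain speed"),
]

# Rule strength = fuzzy AND (min) of its antecedents' truths
strengths = [min(truth[a], truth[b]) for a, b, _ in rules]
print(strengths)  # [0.3, 0.6, 0.1, 0.1]
```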

Fuzzy Inference
- For each rule, clip the action fuzzy set by the belief in the rule
[Figure: the acceleration sets clipped at each rule's strength: small + growing clips "none" at 0.3, small + stable clips "slow" at 0.6, perfect + growing clips "fast" at 0.1, perfect + stable clips "none" at 0.1]

Defuzzification Example
- Three actions (sets) we have reason to believe we should take, each covering a range of values (accelerations)
- Two options for going from the current state to a single value:
  - Mean of Max: take the rule we believe most strongly, and take the (weighted) average of its possible values
  - Center of Mass: take all the rules we partially believe, and take their weighted average
- In this example we slow down either way, but we slow down more with Mean of Max
- Mean of Max is cheaper; Center of Mass exploits more information
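Both defuzzifiers can be computed by sampling the aggregated (clipped, then max-combined) output sets. Everything below is a sketch under assumptions: the triangular set shapes and the [-1, 1] acceleration axis are made up; the clip levels come from the matching example (the two "maintain/none" rules combine to 0.3):

```python
def tri(x, left, peak, right):
    """Triangular membership function (assumed shapes for the output sets)."""
    if x <= left or x >= right:
        return 0.0
    if x <= peak:
        return (x - left) / (peak - left)
    return (right - x) / (right - peak)

sets = {"slow down": (-1.0, -0.5, 0.0),   # hypothetical triangles
        "none":      (-0.5,  0.0, 0.5),
        "speed up":  ( 0.0,  0.5, 1.0)}
clip = {"slow down": 0.6, "none": 0.3, "speed up": 0.1}  # rule strengths

xs = [i / 100.0 - 1.0 for i in range(201)]               # sample the axis
agg = [max(min(tri(x, *sets[s]), clip[s]) for s in sets) for x in xs]

# Center of Mass: weighted average over everything we partially believe
com = sum(x * m for x, m in zip(xs, agg)) / sum(agg)

# Mean of Max: average of the x values where belief is strongest
peak = max(agg)
at_peak = [x for x, m in zip(xs, agg) if m >= peak - 1e-9]
mom = sum(at_peak) / len(at_peak)

print(mom, com)  # both negative (slow down either way);
                 # mom is more negative (slows down harder)
```

Mean of Max only touches the strongest rule's plateau, so here it lands at the center of "slow down"; Center of Mass is pulled toward zero by the weaker "none" and "speed up" beliefs.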

Evaluation of Fuzzy Logic
- Does not necessarily lead to non-determinism
- Advantages
  - Allows continuous-valued actions while still writing crisp rules: can accelerate to different degrees
  - Allows fuzzy concepts such as "medium"
  - Biggest impact is on control problems
  - Can avoid discontinuities (but not always)
- Disadvantages
  - Results are sometimes unexpected and hard to debug
  - Additional computational overhead
  - There are other ways to get continuous functions

References
- Nguyen, H. T. and Walker, E. A. A First Course in Fuzzy Logic, CRC Press, 1999.
- Rao, V. B. and Rao, H. V. C++ Neural Networks and Fuzzy Logic, IDG Books Worldwide, 1995.
- McCuskey, M. "Fuzzy Logic for Video Games," in Game Programming Gems, ed. DeLoura, Charles River Media, 2000, Section 3, pp. 319-329.

Neural Networks
- Inspired by natural decision-making structures (real nervous systems and brains)
- If you connect lots of simple decision-making pieces together, they can make more complex decisions
  - Compose simple functions to produce complex functions
- Neural networks:
  - Take multiple numeric input variables
  - Produce multiple numeric output values
  - Threshold the outputs to turn them into discrete values
  - Map the discrete values onto classes, and you have a classifier!
- Also work as approximation functions

Decision Boundary
[Figure: a decision boundary separating inputs classified as one class from inputs classified as the other]

Simulated Neuron - Perceptron
- Inputs (a_j) come from other perceptrons, with weights (W_i,j)
  - Learning occurs by adjusting the weights
- The perceptron calculates the weighted sum of its inputs: in_i = Σ_j W_i,j a_j
- A threshold function calculates the output: a_i = g(in_i)
  - Step function: if in_i > t then a_i = 1, else a_i = 0
  - Sigmoid: g(x) = 1/(1 + e^-x)
- The output becomes input for the next layer of perceptrons
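A single perceptron is a few lines of code. A minimal sketch of the weighted sum plus the two activation functions described above (the weights and threshold in the example call are arbitrary):

```python
import math

def perceptron(inputs, weights, threshold=0.5):
    """One simulated neuron: weighted sum, then a step threshold function."""
    in_i = sum(w * a for w, a in zip(weights, inputs))  # in_i = sum W_j * a_j
    return 1 if in_i > threshold else 0                 # a_i = g(in_i)

def sigmoid(x):
    """Smooth alternative to the step function: g(x) = 1/(1 + e^-x)."""
    return 1.0 / (1.0 + math.exp(-x))

print(perceptron([1, 0], [0.1, 0.6]))  # 0: weighted sum 0.1 is below 0.5
```

The step function gives crisp 0/1 outputs; the sigmoid is preferred for multilayer learning because it is differentiable.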

Learning Neural Networks
- Learning from examples
  - Each example consists of an input and the correct (target) output t
  - Learn when the network's output o doesn't match the target
    - Adjust the weights to reduce the difference
    - Only change weights by a small amount (the learning rate η)
- Basic perceptron learning: W_i,j = W_i,j + η(t - o)a_j
  - If the output is too high, (t - o) is negative, so W_i,j is reduced
  - If the output is too low, (t - o) is positive, so W_i,j is increased
  - If a_j is negative, the opposite happens

Neural Net Example
- A single perceptron to represent OR
  - Two inputs, one output (1 if either input is 1)
  - Step function: if the weighted sum > 0.5, output 1
- The initial state, weights W_1 = 0.1 and W_2 = 0.6, gives an error on input (1, 0):
  - Σ W_j a_j = 0.1·1 + 0.6·0 = 0.1, g(0.1) = 0, but the correct output is 1
- So training occurs

Neural Net Training
- W_j = W_j + η(t - o)a_j
  - W_1 = 0.1 + 0.1(1 - 0)·1 = 0.2
  - W_2 = 0.6 + 0.1(1 - 0)·0 = 0.6
- After this step, try the (0, 1) → 1 example
  - Σ W_j a_j = 0.6, g(0.6) = 1: no error, so no training

Neural Net Training
- Try the (1, 0) → 1 example again
  - Σ W_j a_j = 0.2, g(0.2) = 0: still an error, so training occurs
  - W_1 = 0.2 + 0.1(1 - 0)·1 = 0.3
  - W_2 = 0.6 + 0.1(1 - 0)·0 = 0.6
- And so on, until W_1 grows large enough that (1, 0) produces output 1
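The hand-worked steps above can be run to completion with a short loop, using the same rule, learning rate η = 0.1, threshold 0.5, and initial weights (the epoch count is an assumption, comfortably more than needed):

```python
def step(weighted_sum, threshold=0.5):
    return 1 if weighted_sum > threshold else 0

def train(examples, w, eta=0.1, epochs=20):
    """Perceptron learning rule: W_j = W_j + eta * (t - o) * a_j."""
    for _ in range(epochs):
        for inputs, t in examples:
            o = step(sum(wj * aj for wj, aj in zip(w, inputs)))
            w = [wj + eta * (t - o) * aj for wj, aj in zip(w, inputs)]
    return w

or_examples = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w = train(or_examples, [0.1, 0.6])  # the slide's initial weights
print(w)  # W_1 has grown past the point where (1, 0) fires; W_2 is unchanged
```

Only W_1 ever changes: every error happens on (1, 0), where a_2 = 0, so its update term is zero.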

Perceptron Problems
- A single perceptron can represent AND, OR, and NOR
- What about XOR? Where should the decision boundary be?
[Figure: the four XOR input points, which no single straight boundary can separate (AGH University Virtual Laboratory of AI)]

Solving XOR
- Use combinations of perceptrons
[Figure: a two-layer network of perceptrons computing XOR (AGH University Virtual Laboratory of AI)]
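One hypothetical way to combine perceptrons for XOR (the weights and thresholds below are illustrative assumptions, not taken from the figure): a hidden OR unit and a hidden NAND unit, ANDed at the output.

```python
def step(x, threshold=0.5):
    return 1 if x > threshold else 0

def xor_net(a, b):
    """Two-layer perceptron network computing XOR.
    All weights and thresholds are made-up illustrative values."""
    h_or   = step(0.6 * a + 0.6 * b)                   # fires if a or b
    h_nand = step(-0.6 * a - 0.6 * b, threshold=-1.0)  # fires unless a and b
    return step(0.6 * h_or + 0.6 * h_nand, threshold=1.0)  # AND of the two

for a in (0, 1):
    for b in (0, 1):
        print(a, b, xor_net(a, b))  # output is 1 exactly when a != b
```

XOR is "or, but not both": the hidden layer computes those two conditions separately, and the output unit requires both, which no single linear boundary can do.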

Neural Network Structure
- Perceptrons are usually organized in layers
  - Input layer: takes external input
  - Hidden layer(s)
  - Output layer: produces external output
- Feed-forward vs. recurrent
  - Feed-forward: outputs only connect to later layers; learning is easier
  - Recurrent: outputs can connect to earlier layers or the same layer, giving the network internal state

Neural Network for Quake
- Four input perceptrons: one for each condition (Enemy, Sound, Dead, Low Health)
- Four-perceptron hidden layer, fully connected
- Five output perceptrons: one for each action (Attack, Wander, Spawn, Retreat, Chase)
  - Choose the action with the highest output
  - Or, probabilistic action selection: choose at random, weighted by output
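The two selection policies can be sketched in a few lines (the action names come from the slide; the output values are made up):

```python
import random

outputs = {"Attack": 0.7, "Wander": 0.1, "Spawn": 0.0,
           "Retreat": 0.15, "Chase": 0.05}   # hypothetical network outputs

# Deterministic: choose the action with the highest output
best = max(outputs, key=outputs.get)
print(best)  # Attack

# Probabilistic: choose at random, weighted by the output values
actions, weights = zip(*outputs.items())
pick = random.choices(actions, weights=weights, k=1)[0]
print(pick)
```

Weighted random selection keeps behavior from being perfectly predictable, which often reads as more lifelike to players than always taking the top action.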

Neural Networks Evaluation
- Advantages
  - Handle errors well: graceful degradation
  - Can learn novel solutions
  - Learning during play might be possible
- Disadvantages
  - Can't understand how or why the learned network works
  - Training examples must match real problems
  - Need many examples
  - Learning takes lots of processing

References
- Mitchell, T. Machine Learning, McGraw-Hill, 1997.
- Russell, S. and Norvig, P. Artificial Intelligence: A Modern Approach, Prentice Hall, 1995.
- Hertz, J., Krogh, A., and Palmer, R. Introduction to the Theory of Neural Computation, Addison-Wesley, 1991.
- Cowan, J. and Sharp, D. "Neural Nets and Artificial Intelligence," Daedalus 117:85-121, 1988.