A New Mathematical Algorithm to Facilitate Adaptive Tactics for Changing Threat Behavior


Mr. Joe Schaff
Naval Air Systems Command
22347 Cedar Point Rd., Unit 6
Code 4.1.11.2, Bldg. 2185, Suite 2190-A1
Patuxent River, MD 20670 USA
Email: schaffjb@navair.navy.mil

Paper presented at the RTO SCI Symposium on Multi-Platform Integration of Sensors and Weapons Systems for Maritime Applications, held in Norfolk, USA, 21-23 October 2002, and published in RTO-MP-097(I).

INTRODUCTION

This paper describes a new mathematical algorithm that allows a system to determine the behavior of an adversary from a small set of reference points. The algorithm also adapts to changes in the adversary's behavior and can predict them in an intuition-like manner. This is particularly important when the adversary has unconventional or dynamically changing rules of engagement, such as those used by terrorists.

The paradigm has evolved out of real-world models of human intelligence, i.e. by using what we know: aspects of genetics and neurobiology make up human beings, so a good starting place appears to be incorporating genetic algorithms to generate neural networks. Shortcomings of that method, such as protracted convergence times, lead to the conclusion that a better method needs to be found. On inspection, the method also lacks fundamental aspects of real-world phenomena, specifically the chaotic behavior that emerges from real-world complexity.

The issues that this new paradigm attempts to address include intuitive reactions based on very limited a priori knowledge, and a rapidly adaptive system that works from hunches or smart guesses. The reason for the new approach is that changing and unpredictable threat behavior, e.g. that of a terrorist, needs to be treated with a different methodology than classical techniques, or even genetic algorithms generating neural nets. Small amounts of known data may have to suffice to determine what the threat will do. Rapid changes in threat tactics must be met with rapid decisions and responses. A better method of abstraction can significantly improve results.

A brief outline of the method follows. By mapping the few known points that we have obtained into a geometric region of non-integer dimension, such as a fractal, we can generate intuitive behavior. This fractal must be an adaptive, self-generating dynamical system; it is covered in greater detail later in this paper, along with the justification for the new paradigm with respect to learning and physical phenomena. Information on threats has to be acquired in real time from distributed networked sensors and all available sources, including other autonomous agents, to create net-centric learning.

Net-centric learning allows the system to adapt to dynamic changes in the threat's signature in real time and to reframe the baseline for the threat. This is accomplished by dynamically morphing the fractal to incorporate new data as knowledge is acquired, so that the new fractal represents the sum total of both old and new data. Intuition results from the new map encompassing unexplored points that represent future threat actions. It remains to be determined just how effective this is compared with older methods, although it evidently converges more rapidly than conventional methods.

LEARNING

How do we learn? Much of what we learn is by analogy: we see a new object in our environment and may treat it the same as a known object that we consider similar to it. We use the original as a template for modeling our interactions with the new one. More observant individuals then go on to note the differences between the new object and the one previously encountered. This indicates that a large portion of our acquired knowledge is based on self-similar relationships and their respective transformations.

To model complex human behavior, it stands to reason that a good start would be to use nature as a template, since nature is an already known working system. Since the brain consists of networks of interconnected neurons, we could start with neural network paradigms. Some of the limitations of neural networks have become obvious over time, and may be due to modeling issues such as the fidelity of the paradigms (e.g. neurons typically communicate via PCM, which allows an additional dimension of information to be encoded into internodal transactions, and this aspect is usually left out of typical paradigms). To bypass these limitations it may be necessary to generate unique neural nets, so, returning to our paradigms from nature, the next step would be to incorporate Genetic Algorithms (GA) to grow optimized neural nets. Much good work has been done on this; however, the underlying problem has always been that the search space is too large for convergence in a reasonable time. If the dynamics of the system to be modeled come into play, convergence takes even longer.

At this point we need to go back to the fundamentals of nature, i.e. the driving factors behind why things differentiate the way that they do. Many aspects of nature are described by complexity and chaos theory, and have affinities for particular behaviors or structures due to both regular and strange attractors. We could build a paradigm that would partially work based on probability alone; it would be a good generalization that encompasses a problem without really understanding its nature. Specific analytical definitions, however, are much better. Instead of treating all of nature as a random distribution of permutations, consider some permutations to have a weighted value: specifically, those near a local minimum would be more likely to occur than those elsewhere. How do we incorporate such real-world mappings and weighted values into the GAs?

Many real-world phenomena are self-similar, i.e. a smaller section of the whole can be scaled iteratively to represent the whole. Examples include trees, broccoli, and cauliflower with their respective branches; planets with multiple moons, which scale to a solar system with multiple planets; coastlines, which scale both upward and downward; and many other phenomena.
These and many other phenomena can be represented by recursive generating functions, namely iterated function systems (IFS), in particular fractals. Fractals have been successful paradigms for real-world phenomena. It seems that by using a fractal of some sort to encompass a priori knowledge, and building it into the GA-generating-NN infrastructure, we should be able to obtain a better degree of convergence. This would be one approach, in which the random generator in the GA is replaced with the IFS.
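
To make the idea of a recursive generating function concrete, the following minimal Python sketch runs the textbook chaos-game iteration for the classic three-map Sierpinski-gasket IFS. It is the standard illustrative example, not the NPPR system introduced later, and the vertex coordinates, sample count, and transient cutoff are arbitrary choices; the point is only that a structured random-iteration loop of this kind could, in principle, stand in for a GA's uniform random generator.

    import random

    # The classic three-map IFS for the Sierpinski gasket: each map contracts
    # the plane by a factor of 1/2 toward one vertex of a triangle.
    VERTICES = [(0.0, 0.0), (1.0, 0.0), (0.5, 0.866)]

    def chaos_game(n_points=50000, seed=0):
        """Random-iteration ("chaos game") sampling of the IFS attractor."""
        rng = random.Random(seed)
        x, y = rng.random(), rng.random()      # arbitrary starting point
        points = []
        for i in range(n_points):
            vx, vy = rng.choice(VERTICES)      # pick one contraction at random
            x, y = (x + vx) / 2.0, (y + vy) / 2.0
            if i > 20:                         # drop the short initial transient
                points.append((x, y))
        return points

    if __name__ == "__main__":
        pts = chaos_game()
        print(len(pts), "points on the attractor, e.g.", pts[:3])

Each pass applies one randomly chosen contraction, yet the visited points settle onto the same fractal set regardless of the order in which the maps happen to be chosen.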

What about dynamical systems in general? That is indeed the nature of our world. At this point we will consider another approach, in which we simply use some sort of adaptive stand-alone IFS to generate its own unique patterns, and let these evolve into a representation of an ordered structure. There has to be a way for the fractal to adapt, morph and distort to include new points as they are acquired ("learned"). Since fractals are self-similar, their elementary parts can each represent an aspect of the real world. A few known points can define the fractal, which consists of the set of known and unknown points; hence a little knowledge can be used to predict and guess unknown behaviors. This is analogous to the way we can use a few known facts to anticipate some future behavior, which we call a hunch or intuitive guess.

At this point we have a system (or agent) that can generate instinctive and intuitive behavior. This "rubber fractal" can learn and adapt and, by its nature, has intuitive properties. The inflection points on this fractal would be related by analogy to real-world events. This could again be applied to GAs, which would converge much faster and create optimized neural nets with adaptive, dynamical (intuitive) responses; or the IFS could be allowed to develop its own unique patterns, which would probably function in a manner similar to the GA and generate something functionally equivalent to the optimized neural nets. This would be done without having to go through the formality of using GAs to generate neural nets. Now we have a paradigm that allows a machine to behave [more] like a human being.

How do we make this fractal paradigm? The mathematics has been developed to do this, and its behavior is different from that of typical fractals: it is a non-predetermined parametric random (NPPR) iterated function system (IFS). Points can lie at arbitrary locations on the surface and still allow the solutions to converge in any random order. This is a characteristic of a self-organizing system.

SELF-ORGANIZING SYSTEMS AND ENTROPY

Cellular automata theory offers a number of examples of self-organizing systems, one of the most common being Conway's Game of Life. These automata are defined by simple rules of interaction (much simpler than in real-world situations) and are capable of generating many modes, ranging from static to cyclic and even truly chaotic system states. This raises the question of whether a system that starts in an arbitrary (random) state can end up producing a repeatable, highly ordered end state. Are we then violating the second law of thermodynamics; are the rules regarding entropy not always true? "Mostly true" would be more accurate, depending upon the semantics. A very simple example can be produced using Boolean nets. Stuart Kauffman (see bibliography), who has been investigating these networks for several decades, found that sparsely connected Boolean networks, with roughly two inputs per node, reliably settle into stable ordered end states, whereas more densely connected networks fall into chaotic states. Let us step aside from the semantic arguments over whether the entropy rules are always true, and consider that some random systems can end up in a stable, highly ordered state.
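
A minimal Python sketch of a Kauffman-style random Boolean network experiment follows; the network size, connectivity values, and step budget are illustrative assumptions rather than figures from this paper. It simply demonstrates the kind of experiment behind the claim above: a randomly wired, randomly initialized network is iterated until a state repeats, and sparsely connected networks tend to fall onto short, ordered attractors while densely connected ones wander far longer.

    import random

    def random_boolean_network(n, k, rng):
        """Each node reads K randomly chosen nodes and applies a random
        Boolean function of K inputs (a lookup table with 2**K entries)."""
        inputs = [rng.sample(range(n), k) for _ in range(n)]
        tables = [[rng.randint(0, 1) for _ in range(2 ** k)] for _ in range(n)]
        return inputs, tables

    def step(state, inputs, tables):
        """Synchronously update every node from its inputs' current values."""
        new_state = []
        for ins, table in zip(inputs, tables):
            index = 0
            for i in ins:
                index = (index << 1) | state[i]
            new_state.append(table[index])
        return tuple(new_state)

    def attractor_length(n=20, k=2, seed=1, max_steps=10000):
        """Run from a random initial state until a state repeats; the gap
        between the two visits is the period of the attractor reached."""
        rng = random.Random(seed)
        inputs, tables = random_boolean_network(n, k, rng)
        state = tuple(rng.randint(0, 1) for _ in range(n))
        seen = {}
        for t in range(max_steps):
            if state in seen:
                return t - seen[state]
            seen[state] = t
            state = step(state, inputs, tables)
        return None   # no repeat within the budget: typical of the chaotic regime

    if __name__ == "__main__":
        for k in (2, 5):
            print("K =", k, ":", [attractor_length(k=k, seed=s) for s in range(5)])
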
Conceptually, the easiest way to envision how a random system can settle into order is to consider crystal formation, where an amorphous liquid loses energy to form a highly ordered structure, a crystal. This is clearly due to the nature of the system, namely the interaction of the molecular forces inherent in the substance that crystallizes; the crystal is the lowest-energy configuration of the substance. A similar analogy can be applied to stable states in cellular automata, which are closely allied with the Boolean nets previously mentioned, and to other random systems with certain constraints. This, along with the seeming violation of the rules on entropy, is also addressed in Stephen Wolfram's latest publication (see bibliography). Such stable states result when the system approaches a minimum (either local or absolute) on a particular mapping of points or elements, within which the mapping of the points can vary only slightly subject to the constraints of the minimum. These energy mappings, plotted as a Lyapunov energy map, can be used to show convergence to a solution.
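
As a small worked example of how such an energy mapping certifies convergence (a standard contraction-mapping argument, not anything specific to the NPPR system): take a single contractive affine map w(x) = s*x + t with |s| < 1. Its fixed point is x* = t / (1 - s), and the "energy" V(x) = |x - x*| behaves as a Lyapunov function, since w(x) - x* = s*(x - x*) and therefore V(w(x)) = |s| * V(x). After k iterations, V(x_k) = |s|^k * V(x_0), which shrinks geometrically toward zero, so every orbit converges to x* no matter where it starts. Applied map by map, the same argument is what guarantees that a contractive IFS settles onto a unique attractor.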

The stability resulting from complexity was investigated by Glass and Mackey (see bibliography) about fifteen years ago; in collaboration with others [Goldberger et al.], they were able to show the chaotic aspect of cardiac rhythms. A particularly interesting result was the conclusion that the less chaotic the cardiac waveform, the greater the chance of a massive cardiac failure, providing an indicator of potential problems. Another result was the hypothesis that the cardiac waveform has a particular fractal aspect to it, indicating yet another role of an IFS in nature.

An interesting and unique mathematical function that I have been investigating is based on a random set of points generated with certain constraints. The system is called a Non-Predetermined Parametric Random Iterated Function System (NPPR IFS). Like many IFS systems, iterating through it will generate highly ordered patterns in many instances, although some results still appear to be truly random sets. The system is dynamical, and the sub-elements are affine transformations of self-similar objects. The nature of contractive affine maps is that they will always converge, provided the contraction ratio is less than one. In the example program that I have used to generate the pictures included at the back of this paper, the denominator in the generating function is always greater than the numerator, meaning that for a function f(n) = a/n with a < n, f(n) will always be less than one. The code is simple, with two lines defining the function's primitive for generating the patterns from a random number generator.

This system starts randomly with only a few data points, and due to its dynamical nature it adapts to new points outside the realm of the initial system. Since it is an IFS, and hence a contractive affine transformation, it converges to a solution set. This allows only a few points of information to define complex patterns, letting the system make hunches or intuitive guesses from sparse information. The system comes close to having intuitive abilities, allowing it to form survival tactics against a terrorist threat of which it would most likely have very limited knowledge. Additional applications could be in the areas of steganography and biometrics, among others. Some of the interesting patterns discovered so far that are produced by this algorithm appear at the end of this paper; there are a tremendous number of as-yet-unknown patterns that the algorithm could produce.

By incorporating net-centric sensor data into the system, the NPPR IFS can fold additional data points into its map, and thus modify (adapt) itself to a dynamically changing environment or changing threat behavior. Since the system is contractive affine, it will still encompass the previously discovered data points and contain future data points within its boundaries. Networked inter-agent communication would allow shared knowledge to enhance and update each agent's fractal knowledge map, and will, by its self-similar nature, create a meta-knowledge map, or super-intelligent agent, due to the collage theorem [ref. Barnsley].
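
The NPPR generating function itself is not reproduced in this paper (it is under separate review), so the following toy Python sketch should be read only as an illustration of the general principle just described, under assumptions chosen for clarity: each known data point is made the fixed point of its own contraction map, so the attractor is guaranteed to pass through every known point while filling in self-similar "guesses" around them, and absorbing a newly learned point amounts to adding one more map. The contraction factor of 0.5 plays the role of the a/n < 1 requirement mentioned above.

    import random

    def point_anchored_ifs(known_points, contraction=0.5):
        """One contraction map per known point; each point is the fixed point
        of its map, so it is guaranteed to lie on the resulting attractor."""
        def make_map(px, py):
            return lambda x, y: (px + contraction * (x - px),
                                 py + contraction * (y - py))
        return [make_map(px, py) for px, py in known_points]

    def sample_attractor(maps, n=20000, seed=0):
        """Chaos-game sampling: the visited points approximate the attractor,
        i.e. the system's map of known and 'guessed' behaviors."""
        rng = random.Random(seed)
        x, y = 0.0, 0.0
        out = []
        for i in range(n):
            x, y = rng.choice(maps)(x, y)
            if i > 20:                        # discard the initial transient
                out.append((x, y))
        return out

    if __name__ == "__main__":
        observed = [(0.0, 0.0), (1.0, 0.2), (0.4, 0.9)]   # sparse a priori data
        maps = point_anchored_ifs(observed)
        before = sample_attractor(maps)
        maps += point_anchored_ifs([(1.2, 1.1)])          # a newly learned point
        after = sample_attractor(maps)
        print(len(before), "points before the update,", len(after), "after")

Because every map shrinks distances, the combined system converges regardless of the order in which the maps are applied, and adding the new map morphs the attractor so that it encompasses both the old and the new data, in the spirit of the net-centric updating described above.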
CONCLUSION

If we have many sets of data generated randomly with constraints, in particular using the NPPR IFS, some highly ordered structures will emerge. By analogy, the same concept may describe how thoughts are formed, given that neurons in the brain fire in a random pattern, and regular wave-like and impulse patterns emerge from these seemingly chaotic neural interactions.

It is a most sobering thought that these sporadic neural firings result in some of the most creative and profound thought processes that we human beings have. These thought processes can also be triggered by external stimuli, such as a probe touching a particular nerve in the brain, or by another thought process cascading into an existing one (e.g. "that reminds me of..."), resulting in an even larger cascade of firings that produces highly ordered thought on a subject. In essence, we are revisiting what seems to be a contradiction of the second law of thermodynamics, whereby we get order out of randomness with no additional applied energy.

It would be fascinating to watch the interactive dynamics of a network-centric society of agents constructed from this mathematical paradigm. Perhaps the lessons learned from such observation could be used to develop novel economic and political paradigms as well.

BIBLIOGRAPHY

Chaos and Fractals: New Frontiers of Science, H. Peitgen, H. Jurgens and D. Saupe; Springer-Verlag, 1992.

Creating Human-Like Behavior in a UAV: A Counter-Terrorism Model, J. Schaff; Proceedings of the Fourteenth Annual Software Technology Conference, May 2002.

A New Kind of Science, S. Wolfram; Wolfram Media, Inc., 2002.

Chaos/Complexity: Causing Software Failures and Creating an Intelligent Adversary Pilot, J. Schaff; Joint Avionics Weapons Systems, Sensors and Simulation Symposium, July 2001.

At Home in the Universe, S. Kauffman; Oxford University Press, 1995.

From Clocks to Chaos, L. Glass and M. Mackey; Princeton University Press, 1988.

On a Mechanism of Cardiac Electrical Stability: The Fractal Hypothesis, A.L. Goldberger, V. Bhargava, B.J. West and A.J. Mandell; Biophysics Journal 48:525-528, 1985.

Non-Predetermined Parametric Random IFS, J. Schaff; currently under review for publication.

Fractals Everywhere, M.F. Barnsley; Academic Press, 1988.

Thanks to Dr. M. Khadivi, Professor of Mathematics, Jackson State University, Jackson, MS, USA, for assistance in refining the mathematical basis of the NPPR algorithm.

NPPR IFS SAMPLE PATTERNS

Figure 1: Randomly Generated Sierpinski Gasket.

Figures 2 & 3: More Structured Patterns from Random Points.