Estimation of Propagating Phase Transients in EEG Data - Application of Dynamic Logic Neural Modeling Approach

Proceedings of International Joint Conference on Neural Networks, Orlando, Florida, USA, August 12-17, 2007

Robert Kozma, Ross W. Deming, and Leonid I. Perlovsky

Robert Kozma is with the Dept. of Computer Science, 373 Dunn Hall, The University of Memphis, Memphis, TN 38152, rkozma@memphis.edu. Leonid Perlovsky and Ross Deming are with the Air Force Research Laboratory, Sensory Directorate, Hanscom AFB, MA 01371, leonid.perlovsky@hanscom.af.mil. The work is partially supported by the National Research Council through an award to Robert Kozma.

Abstract — The Dynamic Logic (DL) approach establishes a unified framework for the statistical description of mixtures using model-based neural networks. In the present work, we extend previous results to dynamic processes in which the mixture parameters, including the partial and total energy of the components, are time-dependent. Equations are derived and solved for the estimation of parameters that vary in time. The results provide an optimal approximation for a broad class of pattern recognition and process identification problems with variable and noisy data. The introduced methodology is demonstrated on the identification of propagating phase gradients generated by intermittent fluctuations in nonequilibrium neural media.

I. INTRODUCTION

Experimental studies indicate that intermittent synchronization across large cortical areas provides the window for the emergence of meaningful cognitive activity in animals and humans (Freeman et al., 2003; Freeman, 2004; Stam, 2005). In neural tissues, populations of neurons send electric currents to each other and produce activation potentials observed in EEG experiments. The synchrony among neural units can be evaluated by comparing their activation levels as a function of time. While single-unit activations have large variability and do not appear synchronous, the activations of neural groups often exhibit apparent synchrony.

What distinguishes brain dynamics from other kinds of dynamical behavior observed in nature is the filamentous texture of neural tissue called neuropil, which is unlike any other substance in the known universe (Freeman, 1991). Neural populations stem ontogenetically in embryos from aggregates of neurons that grow axons and dendrites and form synaptic connections of steadily increasing density. At some threshold the density allows neurons to transmit more pulses than they receive, so that an aggregate undergoes a state transition from a zero-point attractor to a non-zero-point attractor, thereby becoming a population.

Experimental studies on brain waves at the level of neural populations using EEG techniques gave rise to new theories (Freeman, 1991; Freeman, Kozma, 2000). Multiple-electrode recordings in the olfactory bulb indicated that odors are encoded as complex spatial and temporal patterns in the bulb. Based on these observations, a chaos theory of sensory perception has been proposed (Skarda and Freeman, 1987; Freeman, 1991). In this approach, the state variables of the brain in general, and the olfactory bulb in particular, traverse complex chaotic trajectories which constitute a strange attractor with multiple wings. The spatio-temporal behavior of the trajectories and their behavioral significance is studied, e.g., in (Kozma, Freeman, 2002). External stimuli constrain the trajectories to one of the attractor wings, which are identified as stimulus-specific patterns.
Once the stimulus disappears, the dynamics returns to the unconstrained state until the next stimulus arrives.

Identification of spatio-temporal spontaneous and input-induced synchronization-desynchronization events in cortical populations poses a difficult problem due to the noisy and transient character of the processes involved. We propose to use DL-based neural networks for solving this identification task. Concepts and instincts are basic building blocks in this theory. Concepts are formed as models of sensory observations of the external world, while instincts correspond to interoceptory percepts. DL neural networks provide the machinery operating through the knowledge instinct to achieve iterative convergence toward an emotionally balanced final state. Dynamic logic has been used very successfully in solving difficult pattern recognition problems (Perlovsky, 2001, 2006).

In this paper we introduce the DL-based learning method for analyzing spatially distributed EEG patterns. First we derive the corresponding equations applicable to the case of time-varying EEG transitions. Next we describe the algorithm used to solve these equations and estimate the model parameters. We demonstrate the effectiveness of the proposed methodology using simulated data. Our results indicate the feasibility of the proposed methodology and the successful estimation of model parameters in problems with a high noise level. Finally, the results are summarized and directions of future work are outlined, toward the analysis of actual multi-channel EEG data.

II. OVERVIEW OF PHASE TRANSITIONS OBSERVED IN EEG DATA

EEG measurements confirm the presence of the self-sustaining, randomized, steady-state background activity of brains. This background activity is the source from which ordered states of macroscopic neural activity emerge, like the patterns of waves at the surfaces of deep bodies of water. Neural tissues, however, are not passive media through which effects propagate like waves in water (Freeman and Kozma, 2000). The brain medium has an intimate relationship with the dynamics through a generally weak, subthreshold interaction of neurons.

Fig. 1. Phase differences across space and time measured by the Hilbert method. There are long periods of small phase differences, interrupted by short transients with high phase dispersion. The EEG shows that neocortex processes information in frames, like a cinema, at approximately the theta rate (7-12 Hz). The perceptual content is found in the phase plateaus from rabbit EEG; the phase jumps show the shutter (Freeman, 2004).

Fig. 2. Illustration of the onset of phase cones in the rabbit olfactory bulb, using an array of 8x8 intracranial electrodes. The left two panels schematically illustrate the observed amplitude modulation patterns. The right panel shows the locations of phase cones detected during the experiments (Barrie et al., 1996).

Brain activity exhibits a high level of synchrony across large cortical regions. The synchrony is interrupted by episodes of desynchronization, during which the propagation of phase gradients in the activation of local populations can be identified. The spatially ordered phase relationship between cortical signals is called a phase cone. Experiments show that phase gradients propagate at velocities up to 2 m/s and can cover cortical areas several cm^2 in size (Barrie et al., 1996; Freeman et al., 2003).

A typical example of EEG measured on the rabbit cortex is shown in Fig. 1. Coordinated analytic phase differences (CAPD) are calculated in the gamma band (20-50 Hz) with an 8x8 array of 0.79 mm spacing (5.6x5.6 mm total), digitized at 2 ms intervals. The EEG shows that neocortex processes information in frames, like a cinema. The perceptual content is found in the phase plateaus of EEG; similar content is predicted to be found in the plateaus of human scalp EEG. The phase jumps show the shutter (Freeman, 2004).

Estimation of the parameters of these phase relationships is a very difficult problem. Phase differences are typically masked by a high noise level due to the cortical background activity. Phase cones can appear with positive and negative phase lags, corresponding to explosive and implosive transitions in the cortical spatio-temporal dynamics. Figure 2 illustrates the distribution of the amplitude modulation (AM) patterns in the gamma band, which carry information on the subject's cognitive activity. These experiments have been performed on the rabbit's olfactory cortex using an intracranial array of 8x8 electrodes. The experiments show that several phase cones may coexist simultaneously in a given spatio-temporal measurement window.

The results in Fig. 2 were obtained by a properly tuned multi-variable regression technique, which selects the dominant phase cone in a given time interval and conducts parameter estimation for that cone (Barrie et al., 1996). The fitting technique applied in the above reference has been used in various experiments and has produced satisfactory results. It is clear, however, that this estimation is a very tedious procedure; moreover, it obtains the estimated parameters for the dominant cone only. Clearly, a more advanced method of parameter estimation would be very desirable, especially in the case of overlapping cones, which persist only for relatively short time periods of 100 ms or less. In this paper we extend the dynamic logic methodology to transient mixture processes, which makes it suitable for estimating the parameters of EEG phase cones.
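The phase measurements described above rest on the Hilbert analytic phase of band-passed EEG. As a concrete illustration, the following Python fragment extracts gamma-band analytic phase at the sampling parameters quoted above; this is a minimal sketch of the standard computation, not the authors' exact pipeline, and the array sizes, filter order, and variable names are assumptions.

```python
# Sketch: analytic phase of band-passed EEG via the Hilbert transform,
# the standard way phase differences such as those in Fig. 1 are obtained.
# Gamma band 20-50 Hz, 2 ms sampling -> fs = 500 Hz, as quoted in the text.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 500.0                                   # 2 ms sampling interval
b, a = butter(4, [20.0, 50.0], btype="bandpass", fs=fs)

def analytic_phase(eeg):
    """eeg: (n_channels, n_samples) array -> unwrapped phase per channel."""
    x = filtfilt(b, a, eeg, axis=-1)         # zero-phase gamma-band filtering
    return np.unwrap(np.angle(hilbert(x, axis=-1)), axis=-1)

# Spatial phase dispersion versus time: brief, large dispersions mark the
# desynchronization transients between phase plateaus.
rng = np.random.default_rng(0)
eeg = rng.standard_normal((64, 2000))        # stand-in for an 8x8 array
dispersion = analytic_phase(eeg).std(axis=0)
```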
III. DYNAMIC LOGIC EQUATIONS

A. Maximum Likelihood Estimation Framework

The statistical method for maximum likelihood estimation presented here is based on the Dynamic Logic formulation; for details, see (Perlovsky, 2001). Let Θ denote the set of parameters of a model, where s is the total number of parameters:

\Theta = [\Theta_1, \Theta_2, \ldots, \Theta_s].   (1)

The set of available data is given by X_n, where n indexes the data in space and time. Consider a system having N spatial points; for each point of space we measure T time instances, so n = 1, ..., N·T. In the context of EEG frame identification, this means we have T frames of phase patterns, each of size N.

The goodness of fit between the model and the data is often described by the log-likelihood (LL) function (Duda and Hart, 1973). Various forms of LL can be defined. Here we introduce the Einsteinian LL function LL_E following Perlovsky (2001):

LL_E = \sum_{n \in [N,T]} p_0(X_n) \ln \sum_{k=1}^{K} r_k(t) \, p(X_n \mid \Theta_k).   (2)
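To make Eq. (2) concrete, here is a minimal numeric sketch in Python (the original implementation was in Matlab, and all names here are illustrative). It assumes the per-component densities p(X_n | Θ_k) and weights r_k(t) have already been evaluated at every (space, time) sample:

```python
# Sketch of the Einsteinian log-likelihood of Eq. (2), with the data
# flattened over all (space, time) samples n = 1..N*T.
import numpy as np

def einsteinian_ll(p0, r_kt, p_xk):
    """
    p0   : (NT,)   data distribution p_0(X_n) at each sample
    r_kt : (K, NT) component weight r_k(t) at each sample's frame time
    p_xk : (K, NT) component density p(X_n | Theta_k)
    """
    mix = np.einsum("kn,kn->n", r_kt, p_xk)   # sum_k r_k(t) p(X_n|Theta_k)
    return np.sum(p0 * np.log(np.maximum(mix, 1e-300)))
```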

The notation in Eq. 2 is as follows: the model is assumed to have K mixture components, each characterized by its own probability density function p(X_n | Θ_k), where Θ_k is the parameter set of the k-th component of the mixture model. Here p_0(X_n) is the distribution of the data points to be modeled, and r_k denotes the relative weight of component k. The following normalization condition is satisfied:

\sum_{k=1}^{K} \sum_{t=1}^{T} r_k(t) = E,   (3)

where E is the total power of all components across all times, during the experiment of length T.

The parameters of the mixtures are estimated by maximizing the log-likelihood in Eq. 2. Next we derive the conditions for each parameter by calculating the partial derivative of LL_E with respect to the corresponding variables, using the standard method of Lagrange multipliers. Introducing the unknown coefficient λ, the expression F to be extremized is:

F = LL_E + \lambda \left( E - \sum_{t=1}^{T} \sum_{k=1}^{K} r_k(t) \right).   (4)

The time dependence of r_k(t) can obey various conditions, e.g., linear regression:

r_k(t) = r_{k0} + r_{k1} t,   (5)

a higher-order polynomial, or an exponential relationship of the following form:

r_k(t) = r_{k0} \exp(r_{k1} t).   (6)

For the time being, we derive a general condition for any expression of the time dependence of r_k(t); later, when considering a given problem, we will specify the form of r_k(t). Taking the derivative of F with respect to the parameters r_{ki}:

0 = \frac{\partial F}{\partial r_{ki}} = \frac{\partial LL_E}{\partial r_{ki}} - \lambda \sum_{t=1}^{T} \frac{\partial r_k(t)}{\partial r_{ki}}.   (7)

After some algebra, we arrive at the following expression for the unknown parameter λ:

\frac{1}{\lambda} = \sum_{t=1}^{T} \frac{\partial r_k(t)}{\partial r_{ki}} \Big/ \sum_{n \in [N,T]} p_0(X_n) \, P(k|n) \, \frac{1}{r_k(t)} \frac{\partial r_k(t)}{\partial r_{ki}}.   (8)

Here we have defined the association probability P(k|n) as the probability that a given sample belongs to class k:

P(k|n) = \frac{r_k(t) \, p(X_n | \Theta_k)}{\sum_{k'=1}^{K} r_{k'}(t) \, p(X_n | \Theta_{k'})}.   (9)

At this point, the actual functional form of r_k(t) is substituted. We consider the case of exponential growth or decay of the component, see Eq. 6. After a series of transformations, we arrive at the conditions:

\frac{T r_{k1} e^{r_{k1} T} - e^{r_{k1} T} + 1}{r_{k1} (e^{r_{k1} T} - 1)} = \frac{\langle t \rangle}{\langle 1 \rangle},   (10)

r_{k0} = \frac{r_{k1}}{e^{r_{k1} T} - 1} \langle 1 \rangle.   (11)

The bracket notation ⟨·⟩ has been introduced as shorthand for the following weighted sum:

\langle f \rangle = \sum_{n \in [N,T]} p_0(X_n) \, P(k|n) \, f_n.   (12)

Eq. 10 is a transcendental equation; its solution can be determined numerically. It is straightforward to show that Eq. 10 has a unique solution in the parameter range of interest to our problem. For convenience, one can generate a look-up table of values of the coefficient, so the solution of this equation is not computationally intensive. We mention this fact here because at a later stage we need to solve another set of equations iteratively; in that case we cannot use such a shortcut as for Eq. 10. Once r_{k1} is determined as the solution of the transcendental equation, its value is substituted into Eq. 11 to obtain r_{k0}.

In the next step, the remaining model parameters are determined. We assume that the probability density function of each component obeys a normal distribution over the 2-dimensional array of X = [x, y] coordinates:

p(X | \Theta_k) = \frac{1}{2\pi |C_k(t)|^{1/2}} \exp\left( -\frac{1}{2} (X - M_k)^T C_k^{-1}(t) (X - M_k) \right).   (13)

Here M_k is the mean of the Gaussian distribution of the k-th component, while C_k(t) is the time-dependent covariance matrix. The assumption of normal distributions simplifies the treatment ahead, but other distributions can be used as well, as required by the nature of the process under investigation.
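Returning to Eqs. (10)-(12): since the left-hand side of Eq. (10) is monotonic in r_{k1}, a bracketing root finder can stand in for the look-up table mentioned above. A sketch under those assumptions (Python, illustrative names; not the authors' code):

```python
# Numeric solution of the transcendental Eq. (10) for r_k1, then Eq. (11)
# for r_k0. mean_t and mean_1 are the bracket averages <t> and <1> of
# Eq. (12); the bracket [-a_max/T, a_max/T] is an illustrative choice and
# assumes <t>/<1> lies in the range the bracket covers.
import numpy as np
from scipy.optimize import brentq

def _g(a, T):
    """LHS of Eq. (10); continuous at a = 0, where its limit is T/2."""
    u = a * T
    if abs(u) < 1e-8:
        return T / 2.0
    return T * (u * np.exp(u) - np.exp(u) + 1.0) / (u * np.expm1(u))

def solve_rk(mean_t, mean_1, T, a_max=50.0):
    """Return (r_k0, r_k1) given <t>, <1> and the record length T."""
    target = mean_t / mean_1
    rk1 = brentq(lambda a: _g(a, T) - target, -a_max / T, a_max / T)
    if abs(rk1) < 1e-8:                  # flat-energy limit of Eq. (11)
        rk0 = mean_1 / T
    else:
        rk0 = rk1 * mean_1 / np.expm1(rk1 * T)
    return rk0, rk1
```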
According to the modeled process, we assume time dependence of the covariance only, while the mean values are fixed. This is consistent with the problem at hand: in EEG experiments, once the apex is determined as the point of initiation of the phase cone, its position does not change; only the extent of the phase cone varies as it evolves. It is possible to have explosion of phase cones, i.e., increasing cone size starting from an initial point. We also observe negative cones, i.e., implosions, when the phase cone converges to the apex from a broad initial state. Both explosions and implosions can be described by our model, by selecting the sign of the exponential evolution accordingly.

The equation for M_k is obtained by partial differentiation of the log-likelihood, Eq. 2. After simple algebra, we arrive at the expression:

M_k = \frac{c_{0k} \langle X_n \rangle + c_{1k} \langle t X_n \rangle}{c_{0k} \langle 1 \rangle + c_{1k} \langle t \rangle}.   (14)

This is an explicit expression for the class mean values, using known c_{0k} and c_{1k}, k = 1, ..., K.
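A compact sketch of one update pass built from Eq. (9) and Eq. (14) follows (vectorized Python with illustrative array names; the paper's own implementation was in Matlab):

```python
# Association probabilities of Eq. (9) and the closed-form mean update of
# Eq. (14). X is the (NT, 2) array of [x, y] coordinates, t the (NT,)
# frame times, p0 the (NT,) data distribution.
import numpy as np

def associations(r_kt, p_xk):
    """Eq. (9): P(k|n), shape (K, NT)."""
    joint = r_kt * p_xk
    return joint / np.maximum(joint.sum(axis=0, keepdims=True), 1e-300)

def update_means(X, t, p0, P, c0, c1):
    """Eq. (14) for all components; c0, c1 have shape (K,)."""
    w = p0 * P                              # weights p_0(X_n) P(k|n), (K, NT)
    m1 = w @ X                              # <X_n>,   shape (K, 2)
    mt = (w * t) @ X                        # <t X_n>, shape (K, 2)
    s1 = w.sum(axis=1)                      # <1>,     shape (K,)
    st = (w * t).sum(axis=1)                # <t>,     shape (K,)
    num = c0[:, None] * m1 + c1[:, None] * mt
    den = c0 * s1 + c1 * st
    return num / den[:, None]               # M_k, shape (K, 2)
```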

These constants describe the temporal variation of the covariance matrix:

C_k(t) = \frac{1}{c_{0k} + c_{1k} t} \, I,   (15)

where I is the 2x2 identity matrix. We introduce the notation c_{Rk} = c_{1k}/c_{0k}. Note that the covariances are monotonically decreasing in forward time for positive values of c_{0k} and c_{1k}, while they are increasing for a range of negative c_{1k} values. Therefore, the model can describe both explosive and implosive evolution types.

There is no closed-form solution for the covariance model coefficients; instead, the following pair of transcendental equations must be solved simultaneously:

\frac{\langle 1/(1 + c_{Rk} t) \rangle}{\langle t/(1 + c_{Rk} t) \rangle} = \frac{\langle (X - M_k)^T (X - M_k) \rangle}{\langle (X - M_k)^T (X - M_k) \, t \rangle},   (16)

c_{0k} = \frac{\langle 1/(1 + c_{Rk} t) \rangle}{\langle (X - M_k)^T (X - M_k) \rangle}.   (17)

An iterative learning procedure is used to determine the model parameters based on the above equations. Namely, we start with some initial parameter values and use the above adaptive equations to update the parameters r_{k1}, r_{k0}, M_k, and c_{1k}, c_{0k}, for k = 1, ..., K. The iteration is terminated after a given number of steps (typically 100), or when a preset convergence criterion is satisfied. These model parameters can be considered weights, and the DL-based iterative method describes a model-based neural network learning process (Perlovsky et al., 1997; Perlovsky, 2001). A sketch of the covariance-coefficient update is given below. In the next section, results of parameter estimation are presented based on the procedure given above.
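Since Eqs. (16)-(17) have no closed form, each iteration can solve Eq. (16) for c_{Rk} by a one-dimensional root find and then read off c_{0k} from Eq. (17). A sketch, assuming the residual of Eq. (16) changes sign over the chosen bracket (Python, illustrative names):

```python
# Covariance coefficients: root find for c_Rk in Eq. (16), then c_0k from
# Eq. (17). <.> is the weighted average of Eq. (12) with weights
# w_n = p_0(X_n) P(k|n); D_n = (X_n - M_k)^T (X_n - M_k). The bracket
# bounds are illustrative; 1 + c_R t must stay positive over t in [1, T].
import numpy as np
from scipy.optimize import brentq

def update_covariance_coeffs(w, t, D, T, cr_max=10.0):
    """w, t, D: (NT,) arrays. Returns (c0, c1) with c1 = c_R * c0."""
    def bavg(f):                            # bracket sum of Eq. (12)
        return np.sum(w * f)

    def resid(cR):                          # Eq. (16), written as a residual
        q = 1.0 / (1.0 + cR * t)
        return bavg(q) * bavg(D * t) - bavg(t * q) * bavg(D)

    cR = brentq(resid, -1.0 / T + 1e-6, cr_max)
    c0 = bavg(1.0 / (1.0 + cR * t)) / bavg(D)   # Eq. (17)
    return c0, cR * c0
```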
IV. ESTIMATION OF PARAMETERS OF PHASE CONES USING DL

The dynamic logic-based estimation procedure has been implemented in the Matlab computing environment and tested in various scenarios involving simulated phase cones. First, the case of a single phase cone with a low level of noise was completed successfully. Next we produced more complicated simulated scenarios with multiple, overlapping phase cones and a high level of noise. In these experiments the following parameters have been estimated:

- Location of the apex of each phase cone. The localization is conducted both in space and time; namely, we estimate the x_k and y_k coordinates of the apex in the 2-dimensional plane and the onset time instant t_k, for k = 1, ..., K.
- Dynamic parameters for the growth/collapse of the variance of each phase cone, c_{0k} and c_{1k}, k = 1, ..., K.
- Energy evolution of each cone, described by parameters r_{k0} and r_{k1}, k = 1, ..., K.

Our results showed that all these parameters can be estimated successfully within relatively few iterative learning steps. Typically the results converged to the known values within fewer than 100 learning steps. The locations of the cones were the easiest to determine. This is not surprising, as DL is known to perform well for stationary or moving targets of fixed shape, even in environments with high noise and clutter (Perlovsky et al., 1997; Deming et al., 2006). The really challenging task was the estimation of the dynamic parameters, in particular the time dependence of the covariances. Our method worked very well on that task, producing parameter values quickly and with good accuracy. Typically we obtained parameter values with an error of ±10% within fewer than 100 iteration steps. The most difficult task was the estimation of the gradient of the variance growth/decrease.

The system is able to obtain relatively good estimates of all other parameters, such as location and onset time, and thereby produce a reasonable estimate of the scenario, by assuming a certain effective variance for the observation period without time dependence. This corresponds to a local minimum, and special attention should be paid to ensure the system does not get trapped in it. The general methodology proposed by the DL approach is to start with an initially vague model with large variance, and then iteratively improve the model while continuously decreasing the variance between model and observations. By selecting an appropriate variance adjustment scheme, the iterations converge to model parameters close to the actual values and avoid getting trapped in local minima; a sketch of one such schedule follows.
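The paper does not spell out the variance schedule, so the following is only one plausible reading of the vague-to-crisp strategy: maintain a decaying "vagueness floor" on the model variance so early iterations stay broad, and let the data-driven estimate take over once the floor has decayed (all names and constants are assumptions):

```python
# Sketch of a vague-to-crisp variance schedule: blend an initially vague
# variance with the maximum-likelihood estimate var_ml, never letting the
# model become crisper than the data currently support.
def annealed_variance(var_ml, it, var_init=1e3, rate=0.9):
    vague = var_init * rate**it      # geometric decay of the vague floor
    return max(var_ml, vague)        # crispness limited by the floor

# Usage inside the learning loop (hypothetical variable names):
# var_k = annealed_variance(var_ml_k, iteration)
```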

Fig. 3. Final plots of estimating the parameters of 4 phase cones; note that only the final frame is shown.

An example of the estimation of the evolution of phase cones is shown in Fig. 3. We simulated 100 frames, each of 128x128 pixels; four different phase cones were generated during the 100-frame period. The top two panels show the target image (top left) and the estimated image (top right). The bottom left panel represents the error of the model estimation. Note that the panels shown in Fig. 3 correspond to the final state after all 100 frames have been estimated; in this printed paper we cannot show the movie with the time-dependent evolution. The lower right plot in Fig. 3 shows the convergence of the gradient of the variance, c_{1k}. We observe that within about 20 iterations the model converges to a gradient value close to the true value, which is 1 in this case. These results indicate that the introduced DL-based model produces parameter estimates very quickly, within a reasonable accuracy of less than 10%, in simulated phase cone scenarios.

V. CONCLUSIONS

In this work a novel dynamic logic (DL) based identification method is described for modeling transient processes. The method has been used successfully to estimate the parameters of evolving phase cones in simulated data, producing parameter estimates very quickly within a reasonable accuracy of less than 10%. As the next step we will apply the methodology to actual EEG sequences and identify parameters of phase transitions; work in this field is in progress.

DL is a cognitively motivated model, describing the hierarchy of increasingly complex reflections of the environment in the internal world of the subject (Perlovsky, 2006). In the present work we used DL as an efficient pattern recognition tool to solve the problem of determining model parameters; we did not consider its cognitive aspects. In future studies it will be very interesting to apply this work to model the evolution of the perception-action cycle, which progresses through sequences of phase transitions signified by the phase cones modeled here. Therefore, the present results concerning the identification of phase transitions in EEG can be used as building blocks in constructing models of the action-perception cycle in high-level cognitive models of brains.

VI. REFERENCES

Barrie, J.M., Freeman, W.J., Lenhart, M. (1996) Modulation by discriminative training of spatial patterns of gamma EEG amplitude and phase in neocortex of rabbits. J. Neurophysiol., 76, 520-539.

Deming, R., Perlovsky, L.I. (2006) Robust detection and spectrum estimation of multiple sources from rotating-prism spectrometer images. Proc. SPIE, 6365.

Duda, R.O., Hart, P.E. (1973) Pattern Classification, Wiley and Sons, New York.

Freeman, W.J. (1991) The physiology of perception. Scientific American, 264(2), 78-85.

Freeman, W.J., Kozma, R. (2000) Local-global interactions and the role of mesoscopic (intermediate-range) elements in brain dynamics. Behavioral and Brain Sciences, 23, 401.

Freeman, W.J., Burke, B.C., Holmes, M.D. (2003) Aperiodic phase re-setting in scalp EEG of beta-gamma oscillations by state transitions at alpha-theta rates. Hum. Brain Mapping, 19, 248-272.

Freeman, W.J. (2004) Origin, structure, and role of background EEG activity. Part 2. Analytic phase. Clin. Neurophysiol., 115, 2089-2107.

Kozma, R., Freeman, W.J. (2002) Classification of EEG patterns using nonlinear neurodynamics and chaos. Neurocomputing, 44-46, 1107-1112.

Perlovsky, L.I., Schoendorf, W.H., Burdick, B.J., Tye, D.M. (1997) Model-based neural network for target detection in SAR images. IEEE Transactions on Image Processing, 6(1), 203-216.

Perlovsky, L.I. (2001) Neural Networks and Intellect: Using Model-Based Concepts, Oxford University Press, Oxford, U.K.

Perlovsky, L.I. (2006) Toward physics of the mind: Concepts, emotions, consciousness, and symbols. Physics of Life Reviews, 3(1), 23-55.

Skarda, C.A., Freeman, W.J. (1987) How brains make chaos in order to make sense of the world. Behav. Brain Sci., 10, 161-195.

Stam, C.J. (2005) Nonlinear dynamical analysis of EEG and MEG: review of an emerging field. Clin. Neurophysiol., 116(10), 2266-2301.