Spatiotemporal analysis of optical imaging data


NeuroImage 18 (2003)

A. Sornborger,a,* C. Sailstad,b E. Kaplan,c and L. Sirovich a

a Laboratory of Applied Mathematics, Biomathematical Sciences Division, Mt. Sinai School of Medicine, One Gustave L. Levy Place, New York, NY 10029, USA
b Department of Neurobiology, Mt. Sinai School of Medicine, One Gustave L. Levy Place, New York, NY 10029, USA
c Departments of Ophthalmology and Biophysics, Mt. Sinai School of Medicine, One Gustave L. Levy Place, New York, NY 10029, USA

Received 9 July 2001; revised 3 July 2002; accepted 26 August 2002

Abstract

Previous methods for analyzing optical imaging data have relied heavily on temporal averaging. However, response dynamics are rich sources of information. Here we develop and present a method that combines principal component analysis and multitaper harmonic analysis to extract the statistically significant spatial and temporal response from optical imaging data. We apply the method to both simulated data and experimental optical imaging data from the cat primary visual cortex. © 2003 Elsevier Science (USA). All rights reserved.

Introduction

In optical imaging data, the intrinsic signal from cortical activity is often less than 0.1% of the background signal intensity. This signal is difficult to detect directly. A simple method for suppressing this large background is to take differences of the average responses to two different stimuli. We will refer to this method as the standard difference. However, such standard differences often contain residual noise, primarily from physiological sources, and present other methodologic difficulties (Arieli et al., 1996; Sirovich and Kaplan, 2002). The physiological noise found in standard differences can, in principle, be reduced by averaging more data.
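For reference, the standard difference just described is simple to state in code. The following is a minimal NumPy sketch, not the authors' implementation; the array shapes are an assumption:

```python
import numpy as np

def standard_difference(responses_a, responses_b):
    """Average the trials recorded for each of two stimuli, then
    subtract the averages.  The large background common to both
    conditions cancels, leaving the (small) differential signal.

    responses_a, responses_b : arrays of shape (n_trials, H, W).
    """
    return responses_a.mean(axis=0) - responses_b.mean(axis=0)
```

Averaging suppresses trial-to-trial noise, and the subtraction removes everything common to the two conditions, which is exactly why any response shared by all stimuli is lost by this procedure.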
However, more advanced methods that exploit regularities in the data, and that require fewer data than standard differencing, have been developed in our laboratory and elsewhere. Principal component analysis (PCA) (Sirovich and Everson, 1992), independent component analysis (ICA) (Bell and Sejnowski, 1995), indicator functions (Everson et al., 1997), truncated differences (Gabbay et al., 2000), and other methods have all been used to improve the determination of the spatial signal from optical imaging data.

* Corresponding author. E-mail address: ats@camelot.mssm.edu (A. Sornborger).

Method of periodic stacking

This article presents a philosophically different approach from that used in differencing methods and elaborates on techniques introduced by Mitra and Pesaran (1999). By pursuing the approximation that the response we seek from a single stimulus is the same for many repeated measurements, we are able to extract both its spatial and its temporal forms. With our method we extract the statistically significant features of both the spatial layout and the time course of the response signal, including those response features that are common to all stimuli (the nonspecific response) and hence are lost by the standard difference procedure.

Our method, which we call the periodic stacking method, is to assemble a time series of images from a set of measured responses to experimental presentations of the same stimulus. For instance, in the simulated data analyzed below, data from a single presentation consist of 110 frames taken with a CCD camera, with approximately 20 presentations of a given stimulus, randomly interspersed with other stimuli. Changing stimuli during an experiment avoids a possible conditioned response that might occur with repeated presentation of the same stimulus. Although the response to a single stimulus presentation is nonstationary, we may reasonably assume that, except for noise, the underlying response to the same stimulus repeats from presentation to presentation. Thus, we use our freedom to reorder the data, sequentially assembling a series of measurements of the response to a stimulus as a long periodic, stationary signal. We extract the periodic signal from the noise with high-resolution spectral and harmonic analysis methods (see Appendix). By ordering the experimental responses to stimuli as a periodic signal, we retain the advantages of stationary analysis methods while eliminating the problems that can arise from inducing a conditioned response with periodic stimulus presentation.

Image (and other) data are often conveniently reexpressed in terms of principal components (eigenimages and their time courses). This representation is determined by the data and optimally factors the spatial and temporal characteristics of the data (see Sirovich and Kaplan, 2002, and references cited within). Therefore, we first perform a PCA on the imaging data. We follow this step with a multitaper harmonic analysis to estimate the spectrum and the harmonic content of the time series associated with each eigenimage. We then determine the statistically significant periodic harmonic content of the time series of each eigenmode and reassemble the estimated periodic content to form an estimate of the spatiotemporal response to the stimulus.

In a given experiment, we estimate and extract the spatiotemporal response to a number of stimuli. As a last step in our analysis, we arrange the extracted responses in vector form and then perform a vector PCA on this set of images. The vector PCA results in a set of element images, one for each stimulus, that all share the same time course. We use this representation because we assume that the time course of the oxy- and deoxyhemoglobin response, but not its spatial characteristics, is the same from stimulus to stimulus.
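The pipeline just described (PCA, extraction of the stimulus-locked harmonic content, reassembly) can be sketched in NumPy. This is a deliberately simplified illustration, not the authors' code: plain FFT bin selection at multiples of the trial-repetition frequency stands in for the multitaper harmonic F-test, and the function name and array layout are assumptions.

```python
import numpy as np

def periodic_stack_pipeline(frames, n_per_trial, n_components=80):
    """Sketch: PCA, then keep only Fourier bins at exact multiples of
    the trial-repetition frequency, then reassemble the movie.

    frames : array (T, H, W), trials concatenated in time; T must be
             an exact multiple of n_per_trial.
    """
    T, H, W = frames.shape
    X = frames.reshape(T, H * W)
    X = X - X.mean(axis=0)                       # remove the mean image

    # PCA via the SVD: rows of Vt are eigenimages; U*S gives the
    # corresponding time courses a_n(t).
    U, S, Vt = np.linalg.svd(X, full_matrices=False)
    a = U[:, :n_components] * S[:n_components]
    eigenimages = Vt[:n_components]

    # Stimulus-locked harmonics sit at bin multiples of T // n_per_trial.
    A = np.fft.rfft(a, axis=0)
    step = T // n_per_trial
    keep = (np.arange(A.shape[0]) % step == 0)
    keep[0] = False                              # drop the DC bin
    A[~keep] = 0.0
    a_clean = np.fft.irfft(A, n=T, axis=0)

    # Reassemble the cleaned spatiotemporal response.
    return (a_clean @ eigenimages).reshape(T, H, W)
```

The real method replaces the hard bin selection above with a statistical test (the multitaper F-test of the Appendix), so that only harmonics that are significant against the local noise background are retained.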
The results of this analysis allow us to compare, side by side, the responses to the different stimuli presented to the animal. We give a complete mathematical description of our method in the Appendix.

Results

In this section, we present results of our method applied to two data sets. The first contains simulated data that have been embedded in a set of images of macaque visual cortex acquired with no stimulus presented. The second is experimental data from the cat visual cortex; for this data set, drifting gratings at six orientations were presented to the cat to stimulate the visual cortex.

Simulated data

To construct the simulated data, we begin with two time series f_1(x, t) and f_2(x, t). With these we form the vector time series

F(x, t) = ( f_1(x, t), f_2(x, t) ).   (1)

The data in both f_1 and f_2 were acquired in an experiment in which the animal viewed a blank, uniformly illuminated screen. The experimental methods were similar to those described in Everson et al. (1998). These data served as background physiological and experimental noise to which a simulated signal was then added. Both f_1 and f_2 contain 20 repeated measurements of noise, and each individual time series is composed of 110 frames; therefore, the full time series contains 2200 frames.

The simulated signal is

C(x, t) = ( c_1(x, t), c_2(x, t) ),   (2)

where c_1(x, t) = p_1(x) q(t); here p_1(x) = (1/2) p(x), with p(x) the checkerboard pattern shown in Fig. 1, and q(t) is a shifted sgn function, with amplitude zero initially and an instantaneous increase to amplitude 1 at the 40th frame. The signal c_2(x, t) = p_2(x) q(t) is the same but with p_2(x) = −(1/2) p(x), an inverted checkerboard with black and white squares reversed. Note that these data are designed to be well represented by the vector decomposition, in that the time courses for both spatial patterns are the same. The checkerboards take the value 0 in the black squares and 1 in the white squares. The synthetic data are now taken as

D = F + ε k C,   (3)

where k is the maximum pixel value in the unstimulated images and ε is a small coefficient chosen to be representative of the signal amplitude in real data.

Fig. 1. This checkerboard pattern was added to a movie of macaque visual cortex with no stimulus presented to the animal. The movie of the response to no stimulus presentation was used to imitate the physiological and experimental background noise typical of optical imaging experiments. The checkerboard was given an initial amplitude of zero. After 40 frames (of 110 frames for each response), the amplitude was instantaneously increased to a small fraction of the average image amplitude.

Fig. 2 shows log10 of the multitaper estimated spectrum of the first 80 principal components a_n(t). Inspection of the spectrum past principal component 80 showed that higher principal components contained background noise with smooth spectra. As is standard for plotting spectra, we plot log10 of the spectrum, since the variance of spectral estimates is constant with respect to the log of the spectrum. Plotting in this way makes spectral features easier to recognize as a function of frequency. Note that, for similar reasons, we plot the spectra of the functions a_n(t) without multiplying them by the μ_n. By leaving out the factor of μ_n, the spectra of the principal components are plotted with equal power per principal component. This allows subtle features to be seen in the spectra of principal components that contain little power.

Some of the spectral features in Fig. 2 are indicative of the particular experimental setup used to acquire these data. We denote these features with outlined boxes in the plot. A narrow band of power owing to the heart rate is seen distributed across eigenmodes at a frequency of approximately three pulses per second; this is the box along the top of the plot. A few islands of power are also seen below the pulse frequency. These are due to sinusoidal artifacts from the CCD camera. Typically, these sinusoidal artifacts are well separated by the PCA into separate eigenmodes. Faintly seen at about 0.3 Hz, and spread across many principal components, is power owing to respiration. This spectrum was taken before the removal of any harmonic information. We point out that it can be very useful to examine the spectrum of the PCA before doing further analysis: important features of the data, including aspects of the experimental setup, become readily apparent.

Fig. 3 is a plot of the spectrum of the deterministic signal extracted from the data using multitaper harmonic analysis. Note the harmonic stacks around modes 14 through 20. There are about 2 or 3 modes that have long stacks of harmonics, and the bulk of the variance contributing to the periodic signal is in these modes. The large contribution of these principal components to the signal means that the PCA has already done a fairly good job of decorrelating the signal from the rest of the data.

Fig. 2. log10 of the multitaper estimated spectrum of the first 80 eigenmodes of the PCA. Regions of characteristic frequency response from our experimental setup are outlined and labeled in black (see text). Blue indicates the lowest amplitude, with increasing amplitude depicted by green and yellow and the highest amplitude indicated by red.

Fig. 3. log10 of the multitaper estimated spectrum of the first 80 eigenmodes of the signal extracted using multitaper harmonic analysis. Note the long stacks of harmonics in principal components 15 through 20. These principal components contain much of the signal.

Fig. 4. The first vector principal component. The first element contains a checkerboard, and the second element its reverse. These images were obtained without differencing.

Fig. 5. The difference of the two elements of the first vector principal component. This is an example of a way to separate the nonspecific from the stimulus-specific response. By subtraction, the nonspecific response, which is common to both elements, is eliminated.

Fig. 6. The estimated time course of the stimulus (blue, solid line) superimposed on the actual time course of the checkerboard pattern (red, dots connected by lines), allowing for a global shift of amplitude. Although some low-frequency noise remains, the sharp transition from low to high amplitude is accurately captured.

Fig. 7. log10 of the multitaper spectral estimate of principal components of (a) raw data, (b) extracted signal, and (c) residual background noise. Note the clean separation of the harmonic lines making up the signal in (b) from the background noise in (c). The extracted signal is phase locked with the stimulus; all other power in the spectrum is undesirable noise.

Fig. 9. Basis images of the second vector principal component of optical imaging data from the cat primary visual cortex. Each basis image is 4 × 5 mm. The stimuli here are oriented drifting gratings; from left to right, top to bottom, the orientations are 0, 30, 60, 90, 120, and 150 degrees. The second vector principal component is predominantly nonspecific response. However, the low-amplitude stimulus-specific response can be seen by differencing the elements.
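The construction of the synthetic data (Eqs. (1) to (3)) can be sketched as follows. The checkerboard square size is arbitrary here, and eps stands in for the small coefficient ε, whose exact value did not survive transcription; both are illustrative assumptions.

```python
import numpy as np

def make_simulated_data(noise_movie, trial_len=110, eps=1e-3, start=40):
    """Build synthetic data D = F + eps*k*C (Eq. 3).

    noise_movie : array (T, H, W), blank-screen recording; T must be a
                  multiple of trial_len.
    """
    T, H, W = noise_movie.shape
    # Checkerboard p(x) in {0, 1}; p1 = +p/2, p2 = -p/2 (inverted).
    yy, xx = np.indices((H, W))
    p = ((yy // 4 + xx // 4) % 2).astype(float)   # 4-pixel squares
    # q(t): zero for the first `start` frames of each trial, then 1.
    q_trial = np.where(np.arange(trial_len) >= start, 1.0, 0.0)
    q = np.tile(q_trial, T // trial_len)
    k = noise_movie.max()                         # max unstimulated pixel
    c1 = q[:, None, None] * (0.5 * p)
    c2 = q[:, None, None] * (-0.5 * p)
    D1 = noise_movie + eps * k * c1
    D2 = noise_movie + eps * k * c2
    return D1, D2
```

Because the two embedded patterns are sign-reversed copies sharing one time course, summing D1 and D2 cancels the signal exactly, while differencing them doubles it; this is the property the vector decomposition is designed to exploit.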
We need to introduce some terminology to refer to the results of our vector PCA. Each principal component consists of a set of images, one for each stimulus (so, in this example, we have two images per principal component); we will refer to these images as elements or element images. We will refer to a principal component as a vector principal component to emphasize the fact that we are using a vector decomposition.

Fig. 4 is a plot of the first vector principal component resulting from the second PCA, that is, the PCA of the cleaned, reconstructed data containing the retained harmonic components. Note that the signal is clearly visible and that the checkerboards are correctly determined, with the first element a positive checkerboard and the second element its reverse. Also note that these images are not differenced and therefore contain baseline amplitude information for a single stimulus. This information is often discarded in the analysis of optical imaging data, but it can be of use in the data analysis.

With the periodic stacking method, the response to the stimulus is captured solely by analysis of features of the periodically stacked images in the frequency domain. Because of this aspect of our analysis, a nonspecific response can be detected. By a nonspecific response, we mean a time-dependent response that is phase locked with all stimuli but not specific to any particular stimulus. In Fig. 4, the differential response dominates because the simulated data were constructed to have a strong differential component. If a nonspecific response dominated, then the element images of the vector principal components would look very similar. As we will see, this is the case in the analysis of the experimental data of the orientation response in cat cortex that we present below.

Fig. 5 shows the difference of the two elements of the first vector principal component. We may difference elements of the vector principal components to separate the nonspecific from the specific response, since, by subtraction, the nonspecific response, which is common to both elements, is eliminated. In Fig. 6, we plot the reconstructed time course of the signal superimposed on the input time course (which has been shifted on the vertical axis to line up with the reconstructed time course). Although this time course is somewhat noisy at low frequencies, the sharp step is well represented, and the difference in signal amplitude is accurately captured.

Fig. 8. Elements of the first vector principal component of optical imaging data from the cat primary visual cortex. Each element is 4 × 5 mm. The stimuli here are oriented drifting gratings; from left to right, top to bottom, the orientations are 0, 30, 60, 90, 120, and 150 degrees. The first vector principal component contains the average picture; therefore, these elements are all very similar.

Fig. 10. Basis images of the third vector principal component of optical imaging data from the cat primary visual cortex. Each basis image is 4 × 5 mm. The stimuli here are oriented drifting gratings; from left to right, top to bottom, the orientations are 0, 30, 60, 90, 120, and 150 degrees. The third vector principal component also contains predominantly nonspecific response.

Orientation data

We now apply our method to optical imaging data recorded from the primary visual cortex of a cat that viewed drifting square-wave gratings at various orientations.
Twenty separate image sequences, each with 128 frames of 400 ms duration (giving a time series of 51.2 s duration), were taken during the response to a single stimulus. Six different stimuli, with orientations of 0, 30, 60, 90, 120, and 150 degrees, were presented to the animal. The experimental methods were similar to those described in Everson et al. (1998). We analyzed the data using the method of periodic stacking as described above: first we performed a PCA on the 20 concatenated image sequences (a total of 20 × 128 = 2560 frames), and then we extracted the statistically significant harmonic content. This resulted in six cleaned, reconstructed data sets, f_1, …, f_6, which composed the vector data set F = ( f_1, …, f_6 ). Finally, we performed a vector PCA on the reconstructed data.

In Fig. 7, we plot log10 of three multitaper estimates of spectra: (a) the raw data, (b) the extracted signal, and (c) the background noise. The harmonics corresponding to the signal we want to extract are visible in the spectrum of the raw data in Fig. 7a, but they are mixed in with considerable background noise. Note in Fig. 7b how cleanly the signal is extracted, leaving little trace of the harmonics in Fig. 7c, the spectrum of the residual background noise. There are a number of signal components in the data, most of which, like respiration and heart rate, are not in phase with the stimulus and are therefore not extracted with our method; these we consider to be background noise. The extracted signal components are all phase locked with the stimuli and are therefore part of the response to the stimulus. As we mentioned above in the discussion of the simulated data, some components of the extracted response are evoked by all orientation stimuli, and others are specific to a particular stimulus.
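The vector PCA step above, in which the M cleaned data sets share one set of time courses, can be sketched by stacking the stimuli along the pixel axis before a single SVD. This is a NumPy illustration under assumed array shapes, not the authors' code.

```python
import numpy as np

def vector_pca(cleaned, n_components=3):
    """Vector PCA sketch.

    cleaned : array (M, T, P) -- M stimuli, T frames, P pixels.
    Stacking the stimuli along the pixel axis forces every principal
    component to carry one shared time course a_n(t) together with M
    element images, one per stimulus.
    """
    M, T, P = cleaned.shape
    X = np.transpose(cleaned, (1, 0, 2)).reshape(T, M * P)
    X = X - X.mean(axis=0)
    U, S, Vt = np.linalg.svd(X, full_matrices=False)
    time_courses = U[:, :n_components] * S[:n_components]     # shared a_n(t)
    elements = Vt[:n_components].reshape(n_components, M, P)  # element images
    return time_courses, elements
```

Differencing two element images of the same component, elements[n, j1] - elements[n, j2], removes the nonspecific part common to all stimuli, which is the separation used in Figs. 5 and 9.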
We refer to signal components that respond to all of the stimuli as nonspecific; we refer to those that respond to a particular stimulus as stimulus specific, or the differential response. A main result of our analysis was that the dominant response to a stimulus is nonspecific: all stimuli are phase locked to a wave component moving across the cortex. In addition to this nonspecific component, we also found stimulus-specific differences in response; this differential component is the orientation response. In this experiment, the total power in the raw data is 1 order of magnitude (9.8 times) larger than that in the estimated nonspecific component, which is in turn 1/2 order of magnitude (5.4 times) larger than that in the differential component.

In the vector PCA, a strong nonspecific response gives rise to element images that look alike, while the differential response may be seen as differences between element images. In the sequence of vector principal components from our analysis of the orientation response in the cat cortex, differences between the element images are small in low-index principal components but become more noticeable as the component number increases. These small differences make up the stimulus-specific response. In Figs. 8, 9, and 10 we plot the first three vector principal components, and in Fig. 11 we plot their respective time courses with one-sigma error bars (see the section Mathematical background). The similarity between the pictures indicates that the bulk of the cortical response is nonspecific.

Our method has extracted a number of features of the nonspecific response that appear in these data. Consider the time courses of the first few vector principal components, plotted in Fig. 11; these are dominated by the nonspecific response. The stimulus in these data begins at approximately 18 s (the location of the vertical red line in the panels of Fig. 11).
Before the stimulus is presented, there are oscillations with a frequency of 0.1 Hz that are phase locked to the stimulus. After the stimulus appears, a large-amplitude oscillation with one period resets the oscillatory phase, and then the oscillations are damped but can still be seen, particularly in time course a_3(t). The 0.1-Hz oscillations are the vasomotor oscillations commonly found in neuroimaging experiments (Golanov et al., 1994; Mayhew et al., 1996).

How can these oscillations be locked to the stimulus before the stimulus begins? We suspect the reason is that, although the orientation of the stimulus is randomly varied throughout the experiment, the timing of stimulus presentation is still periodic. The implication is that when the presentation of a stimulus ends, oscillations are set up with a particular phase, which is then found preceding the next stimulus, and the same effect recurs with each subsequent stimulus. Thus, stimulation can cause phase locking of vasomotor oscillations.

The multitaper harmonic analysis that we use to extract the harmonic content of periodically stacked data may also be used to estimate the frequency of the prestimulus oscillations. In Fig. 12, we plot log10 of the F statistic as a function of frequency. The red line indicates the statistical threshold. The oscillations are identified as the peak above threshold at 0.1 Hz.

As an example of a technique for investigating the time course of the differential response, in Fig. 13 we show a series of images from the difference time course, taken by subtracting the response (cleaned with our multitaper harmonic analysis method) to a vertical stimulus from that to a horizontal stimulus,

Σ_n a_n(t) [ φ_n^(1)(x) − φ_n^(4)(x) ],   (4)

resulting in a movie of the difference in response to two orthogonal stimuli. Notice the orientation patches arising after the onset of the stimulus. The signal begins just before the fifth image at 18 s (first on the left, second row) and is slightly visible there. By the next image, where the dappled orientation response is evident, the signal has had time to rise and saturate.
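The difference-movie construction of Eq. (4) is a small computation once the vector PCA results are in hand. In this sketch, the element indices for the two orthogonal gratings are an assumption about the stimulus ordering (0 and 90 degrees as elements 0 and 3 of the 0, 30, ..., 150 sequence):

```python
import numpy as np

def difference_movie(time_courses, elements, j_a=0, j_b=3):
    """Eq. (4) sketch: movie(t, x) = sum_n a_n(t) (phi_n^a(x) - phi_n^b(x)).

    time_courses : array (T, N) -- shared time courses a_n(t).
    elements     : array (N, M, P) -- element images, one per stimulus.
    """
    diff = elements[:, j_a, :] - elements[:, j_b, :]   # (N, P)
    return time_courses @ diff                          # (T, P) movie
```

Because the nonspecific part of each component is common to both element images, it cancels in the subtraction, and the movie shows only the differential (orientation) response.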
Our method, in addition to providing information about the dynamics of the response, provides a cleaned data set with a considerably higher signal-to-noise ratio than the raw data for further analysis. In Fig. 14, we plot the results of a generalized indicator function analysis (GIFA) on cleaned data. GIFA is a method for generating spatial maps that characterize the statistically significant static response to a set of stimuli (Yokoo et al., 2001). In the left panel of Fig. 14, we plot an indicator function that represents characteristics of the orientation response calculated from raw data; in the right panel, we plot an indicator function from data reconstructed with our periodic stacking method. The amplitude of the response from the reconstructed data is twice that of the raw data. This improvement leads to better characterization of the functional architecture of the cortex.

In addition to their use in analyzing the cortical signal, we have found that estimates of dynamic information in the signal can be very useful for other reasons. The method of periodic stacking has helped us troubleshoot our experiments. For instance, we were able to determine that one of our CCD cameras was dropping frames at the onset of the stimulus in one experimental setup, giving rise to a sharp step in the time courses of the principal components.

Discussion

It is important to emphasize that the periodic stacking approach is a method for distinguishing and extracting signal from noise in the time domain. As an initial step, for multivariate data, we use principal component analysis to compress the data. By retaining the first 100 or so principal components, we usually capture more than 99.9% of the variance in the multivariate signal. However, other methods for the analysis of multivariate data can also be used at this point. For instance, ICA may be used.
Although ICA provides orthogonal components in only one of the spatial or temporal domains, not both, the same periodic stacking approach may be used on the resulting time courses. In general, the rule of thumb is that the better a basis is, the fewer components are needed to contain the same amount of variance. ICA is known to be useful for detecting independent, linearly superimposed signals.

We also want to mention that switching the order of attack, that is, first averaging in time (i.e., trial averaging) and then applying frequency analysis, would allow (potentially) large amounts of spurious noise into the results. The usefulness of multitaper harmonic analysis comes from the robustness of regressing on a set of approximately orthogonal estimates of frequency content. The bandwidth of the estimates is set by the number of periods of oscillation in the time series. Therefore, better results will always be obtained by analyzing as many periods as possible, that is, by using the harmonic analysis to estimate the average rather than averaging before using the harmonic analysis.

Finally, we note that our method is very generally applicable to multivariate data acquired in stimulus-locked experiments. From this point of view, it is applicable to fMRI data, evoked potentials in EEG data, MEG, and many other types of biological imaging data.

Conclusions

We have developed a spatiotemporal data analysis method, called the method of periodic stacking, that gives an improved estimate of the response to a stimulus relative to the more traditional methods of differential imaging. With this method we extract temporal information, clean our data of experimental artifacts, separate stimulus-specific responses from nonspecific responses, and increase the signal-to-noise ratio of our measurements by making use of the characteristics of a periodic signal in the frequency basis. These features provide a powerful tool for investigating the dynamical aspects of imaging.

Fig. 11. Time courses a_1,2,3(t), with one-sigma error bars, of the first three principal components. The vertical line indicates the onset of the stimulus. Oscillatory behavior precedes the stimulus, is followed by a relatively strong single oscillation, and then becomes damped during the persistent presentation of the stimulus.

Fig. 12. A plot of the statistical significance of sinusoids from a multitaper harmonic analysis of the prestimulus segment of a_2(t). The significance threshold is plotted in red. The statistically significant harmonic at approximately 0.1 Hz is evident. Associated with this temporal oscillation is a moving wave across the cortex.

Fig. 13. Frames at 0.4, 5.2, 10, 14.8, 19.6 (stimulus between this and the previous frame), 24.4, 29.2, 34, 38.8, 43.6, 48.4, and 51.2 s of the 51.2-s difference movie of the time courses of two orthogonal orientations. Frames are plotted left to right, top to bottom, in increasing order. Each frame is 4 × 5 mm in area. The stimulus begins at 18 s; the signal is partially visible at the fifth image (first on the left, second row), 1 s or so after the onset of the stimulus, and becomes clearly evident by the sixth image (second from the left, second row).

Acknowledgments

This work was supported by NIH-NEI EY11276, NIH-NIMH MH50166, and NIH-NEI EY. E.K. is the Jules & Doris Stein Research to Prevent Blindness Professor at the Ophthalmology Department at Mt. Sinai. We thank Bruce Knight for helpful discussions and comments on the manuscript and Takeshi Yokoo for helpful discussions and help with his generalized indicator function method.

Fig. 14. Comparison of results from postanalysis of data cleaned with the method of periodic stacking presented in this paper. (Left) Indicator function (see text for a description of indicator function methods) obtained from raw data; (right) indicator function obtained from cleaned data. The amplitude of the indicator function on the right is twice that on the left, leading to better characterization of the functional architecture of the cortex.

Appendix. Method of periodic stacking: mathematical background and description

For simplicity of exposition we make the (unnecessary) assumption that, for each stimulus type θ_j, we have a collection of P responses s_i(n, x, θ_j), i = 1, …, P, and that data have been acquired at a uniform sampling interval Δt, at times t = nΔt, where n = 1, …, N. For later purposes we define D = NΔt, so that D is the duration of a single episode, and ω = 1/D is the corresponding frequency.
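Under the assumptions just stated, the bookkeeping for periodic stacking amounts to a concatenation. A minimal NumPy sketch, with the array layout and frame period as assumptions (0.4 s matches the cat experiment described above):

```python
import numpy as np

def stack_episodes(episodes, dt=0.4):
    """Concatenate P single-episode recordings s_i (each N frames) into
    one long, putatively periodic series of duration P*D, D = N*dt.

    episodes : array (P, N, ...) -- trailing axes hold the pixels.
    Returns the stacked series, the episode duration D, and the
    fundamental stacking frequency omega = 1/D.
    """
    P, N = episodes.shape[0], episodes.shape[1]
    D = N * dt                    # duration of a single episode (s)
    omega = 1.0 / D               # corresponding frequency (Hz)
    stacked = episodes.reshape((P * N,) + episodes.shape[2:])
    return stacked, D, omega
```

All of the stimulus-locked structure of the stacked series then lives at integer multiples of omega, which is what the harmonic analysis below exploits.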
Although each individual time series is nonstationary, we construct a putatively stationary time series by concatenating the s_i, thus forming the full time series F(t, x, θ_j) = (s_1, s_2, …, s_P) of total duration PD. Here, j = 1, …, M is an index that represents the M different stimuli under consideration. We refer to this procedure and the following analysis as the method of periodic stacking.

After assembling the data in this manner, we perform a vector PCA, outlined below. In developing the PCA decomposition, we follow the procedure given in Sirovich and Everson (1992). We express F(t, x, θ_j) as a vector F(t, x), with components

(F)_j = F(t, x, θ_j),   (5)

and seek a decomposition in factored form

F = Σ_n μ_n a_n(t) φ_n(x),   (6)

where {a_n(t)} and {φ_n(x)} are orthonormal:

(a_n, a_m)_t = Σ_t a_n(t) a_m(t) = δ_nm   (7)

and

(φ_n, φ_m)_x = Σ_x φ_n†(x) · φ_m(x) = δ_nm,   (8)

where † denotes the transpose and the dot indicates the usual vector inner product. The μ_n are initially unknown constants. By using the vector representation for the PCA, the spatial components φ_n(x) of the decomposition each have the same temporal response a_n(t). The response in optical imaging data is due to changes in the reflectance of cortical tissue.

The vector formulation adopted above incorporates the full range of correlations latent in the data (Sirovich, 1987). It can justifiably be adopted when the responses are a consequence of like stimuli, such as sinusoidal patterns at various orientations. There would be no justification for this approach when confronted by responses to unrelated stimuli, say color and direction.

Next, it follows that for Eq. (6) to hold under Eqs. (7) and (8),

a_n = (1/μ_n) (φ_n, F)_x   (9)

and

φ_n = (1/μ_n) (a_n, F)_t.   (10)

If Eq. (10) is substituted into Eq. (9), we obtain

Σ_s C(t, s) a_n(s) = λ_n a_n(t),   (11)

with

C(t, s) = (F(t, x), F(s, x))_x   (12)

and λ_n = μ_n². Alternatively, if Eq. (9) is substituted into Eq. (10), we obtain

Σ_y K(x, y) φ_n(y) = λ_n φ_n(x),   (13)

where the matrix operator K is given by

K(x, y) = (F(t, x), F(t, y))_t.   (14)

In Eqs. (12) and (14), C(t, s) is the autocovariance and K is the spatial correlation. It is immediate that both C and K are symmetric nonnegative operators and hence, by standard theorems, give rise to complete, orthonormal eigenfunction sets. This justifies the assertion of the decomposition in Eq. (6). Only one of the eigenproblems needs to be solved, since either Eq. (9) or Eq. (10) furnishes the undetermined complementary eigenfunctions. In virtually all image analysis cases, in which there are fewer pictures than there are pixels in each picture, Eq. (11) is the problem of choice, since it leads to the smaller matrix diagonalization. The sought-after φ_n(x) is then determined from Eq. (10), which clearly shows that each φ_n(x) is composed of a sum of snapshots of the images. As originally derived (Sirovich, 1987), this was therefore referred to as the method of snapshots.

Spectral analysis

An essential element of our deliberations is the treatment of the time series of the coefficients {a_n(t)} and thus, by implication, the time series for each pixel.
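Before turning to the spectral treatment, the snapshot route just described, diagonalizing the small T × T autocovariance of Eq. (11) and then recovering each eigenimage from Eq. (10), can be sketched in NumPy. Names and shapes here are illustrative, not the authors' code:

```python
import numpy as np

def snapshots_pca(F, n_components=3):
    """Method-of-snapshots sketch (Eqs. 10-12).

    F : array (T, P) with T << P (far fewer frames than pixels).
    Returns orthonormal time courses a_n(t), the constants mu_n, and
    orthonormal eigenimages phi_n(x).
    """
    T, P = F.shape
    C = F @ F.T                          # C(t,s) = (F(t,.), F(s,.))_x
    lam, A = np.linalg.eigh(C)           # eigenvalues ascending
    order = np.argsort(lam)[::-1][:n_components]
    lam, A = lam[order], A[:, order]     # lam_n = mu_n**2
    a = A                                # orthonormal a_n(t)
    mu = np.sqrt(np.maximum(lam, 0.0))
    # Eq. (10): phi_n = (1/mu_n) * sum_t a_n(t) F(t, .)
    phi = (a.T @ F) / mu[:, None]
    return a, mu, phi
```

The payoff is purely computational: a T × T diagonalization replaces the P × P one, and the eigenimages come out as weighted sums of the snapshots, exactly as the text notes.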
For this purpose we introduce Thomson's adaptive multitaper approach (Percival and Walden, 1993; Thomson, 1982). For the sake of completeness, we now present a brief overview of this method. A typical coefficient in Eq. (6) will be denoted by a(t). In continuum notation, we write its Fourier transform as

    \tilde{a}(\nu) = \int e^{-2\pi i \nu t} a(t)\, dt,   (15)

in terms of which Parseval's relation states

    \int |a(t)|^2\, dt = \int |\tilde{a}(\nu)|^2\, d\nu.   (16)

By inference, |\tilde{a}(\nu)|^2 is the power spectral density (psd). For stationary signals of zero mean, which are of interest to us, the psd, according to Wiener theory, is instead given by the Fourier transform of the autocorrelation. We will now air some of the pitfalls and difficulties in calculating the psd, from the point of view of the naive approach. Define the boxcar function

    h_T(t) = 1 for |t| \le T/2,  h_T(t) = 0 for |t| > T/2,   (17)

and from it the truncated signal

    a_T(t) = h_T(t)\, a(t).   (18)

The Fourier transform of Eq. (18) is given by the convolution

    \tilde{a}_T = \tilde{h}_T * \tilde{a},   (19)

where

    \tilde{h}_T(\nu) = \frac{\sin(\pi \nu T)}{\pi \nu} = T\, \mathrm{sinc}(\nu T).   (20)

It is clear from Eqs. (19) and (20) that the contribution to |\tilde{a}_T(\nu)|^2 from the frequency \nu' is proportional to

    A_T(\nu - \nu') = \left( \frac{\sin(\pi (\nu - \nu') T)}{\pi (\nu - \nu')} \right)^2,   (21)

which is plotted in Fig. 15 (bottom panel). It is clear from this figure that there is leakage, also called bias, from the power at \nu' to a wide spectrum of other frequencies. Similar leakage is found in the use of the finite Fourier transform to calculate the psd,

    S_p(n\nu_0) = \left| \int_{-T/2}^{T/2} e^{-2\pi i n \nu_0 t} a(t)\, dt \right|^2,   (22)

where \nu_0 = 1/T. This, known as the periodogram, only furnishes a sampled spectrum, that is, at multiples of \nu_0. Later, we will show how the multitaper method provides a spectral estimate with significantly less leakage. In practice, a(t), and hence \tilde{a}(\nu), is stochastic (since it is a sum of random variables it is also Gaussian, a fact that will
figure in our later deliberations). Thus the psd exhibits variance. Taking a larger truncation, for example,

    a_{LT}(t) = \sum_{j=1}^{L} h_T(t - (j-1)T)\, a(t) = \sum_{j=1}^{L} a_T^j(t),   (23)

does not help with the variance problem; it only changes the frequency resolution from \nu_0 to \nu_0/L. The variance remains the same at the more finely sampled frequencies. To reduce the variance, we consider the set \{a_T^j(t)\} defined in Eq. (23). (The above-defined a_T(t) is just a_T^1(t).) Each of these leads to an estimate of the psd, say

    S_T^j(\nu) = |\tilde{a}_T^j(\nu)|^2,   (24)

which we may calculate by means of the finite Fourier transform, as we did with the periodogram, Eq. (22). Since the estimates of Eq. (24) are, for all j, defined at the same frequencies, we can calculate their average over the L samples,

    \bar{S}_T(\nu) = \langle S_T^j(\nu) \rangle_j.   (25)

For a time-stationary signal, this is tantamount to calculating the autocorrelation by temporal averaging and is thus consistent with Wiener theory for the construction of the psd.

The boxcar functions h_T(t - (j-1)T) are examples of tapers. Tapers are functions chosen to multiply the time series. Thomson (1982) investigated the prospect of finding tapers, w(t), that would optimize the spectral concentration, thereby minimizing the leakage in a spectral estimate. In an earlier, related context, Slepian (1964) showed that the solution to this problem can be found by solving the eigenfunction problem

    \int_{-1}^{1} \mathrm{sinc}\big(c(t - t')\big)\, w(t')\, dt' = \lambda\, w(t).   (26)

Both a temporal interval T and a frequency interval W have been normalized out of this eigenfunction problem (Percival and Walden, 1993). The kernel is symmetric, and the eigenfunctions therefore form an orthogonal set \{w_k(t)\}. The eigenvalues can be shown to be nonnegative, and each measures the degree of spectral concentration of its eigenfunction. The first of these is depicted, in appropriate coordinates, in Fig. 15 (top panel). The great improvement in bias over the square window h_T(t) is evident. These functions are called prolate spheroidal functions or, more commonly, Slepian functions. They are parametrized by the so-called time-bandwidth product, TW. The first gives the best spectral concentration, the second is best in the orthogonal complement, and so forth. In Thomson's multitaper methods, the leakage, or bias, is lowered by considering the spectra derived from w_k(t)a(t), where k = 1, \ldots, K ranges over tapers with good frequency concentration, while the variance is decreased by averaging over the associated spectral estimates, as was done with the boxcar tapers in Eq. (25).

Fig. 15. The square of the Fourier transform of two tapers (see definitions in text). The top panel shows the squared Fourier transform of a prolate spheroidal function of the kind used in multitaper estimates; note the narrow central peak and the low-amplitude wings. Contrasting with the top panel, the bottom panel depicts the square of the Fourier transform of a boxcar function, the taper associated with the periodogram spectral estimate; note the high-amplitude wings, showing significant leakage.

Harmonic analysis

In our application, the admissible frequencies are already known to be integer multiples of the base frequency, 1/D, since the data have been deliberately assembled with this in mind. Our objective is to find the best approximation of the signal while removing as much noise (i.e., all nonperiodic or variable-phase components of the signal) as possible. This requires phase, as well as frequency, information, and hence the power spectral density alone does not suffice. To achieve this, we consider a typical coefficient of \{a_n(t)\}, denoted by a(t), and write

    a(t) = a_0(t) + \eta(t),   (27)

where a_0(t) is the signal we want to extract and \eta(t) is noise, assumed to be locally white (but possibly colored at separations larger than the bandwidth W). In Fourier space,

    a(t) = \int \big[\tilde{a}_0(\nu) + \tilde{\eta}(\nu)\big]\, e^{2\pi i \nu t}\, d\nu.   (28)

For the purpose of exposition, we consider the signal to be located in one narrow band,

    a(t) = \tilde{a}_0(\nu_1)\, e^{2\pi i \nu_1 t} + \int \tilde{\eta}(\nu)\, e^{2\pi i \nu t}\, d\nu.   (29)
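The multitaper machinery of Eqs. (23)–(26) can be illustrated with a short sketch. This assumes SciPy's `dpss` routine for the Slepian tapers (a convenience, not part of the original method; any solution of Eq. (26) would do). It compares the out-of-band leakage of the boxcar and Slepian tapers, as in Fig. 15, and then forms the averaged eigenspectrum estimate in the spirit of Eq. (25) for a toy sinusoid in noise.

```python
import numpy as np
from scipy.signal.windows import dpss

rng = np.random.default_rng(1)
N, NW, K = 1024, 4, 7            # K = 2*NW - 1 well-concentrated tapers
t = np.arange(N)
# Toy coefficient a(t): one sinusoid at nu = 0.2 buried in white noise.
a = np.sin(2 * np.pi * 0.2 * t) + rng.standard_normal(N)

tapers = dpss(N, NW, Kmax=K)     # Slepian tapers w_k(t), cf. Eq. (26)

def leakage(w, pad=16 * N):
    """Fraction of |w~(nu)|^2 outside the band |nu| <= W = NW/N."""
    S = np.abs(np.fft.rfft(w / np.linalg.norm(w), n=pad)) ** 2
    cut = int(round(NW / N * pad))
    return S[cut:].sum() / S.sum()

# The Slepian taper is, by construction, far more concentrated than
# the boxcar taper implicit in the periodogram (Fig. 15).
print(f"boxcar leakage : {leakage(np.ones(N)):.1e}")
print(f"Slepian leakage: {leakage(tapers[0]):.1e}")

# Thomson's estimate: average the K eigenspectra |FFT(w_k(t) a(t))|^2,
# trading a factor of roughly K in variance for a bandwidth of 2W.
S_mt = np.mean(np.abs(np.fft.rfft(tapers * a, axis=-1)) ** 2, axis=0)
freqs = np.fft.rfftfreq(N)
print(f"spectral peak at nu = {freqs[np.argmax(S_mt)]:.4f}")
```

The printed leakage figures make the bias advantage of the Slepian taper concrete, and the recovered peak sits at the injected frequency to within the analysis bandwidth.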

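A toy illustration of why periodic stacking makes the harmonic model of Eq. (27) useful (the response waveform below is invented, not taken from the paper): concatenating P presentations of duration D each concentrates the stimulus-locked part of a(t) on the harmonic grid \nu = m/D, while the noise spreads over all N Fourier frequencies.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical stacked record: P presentations of D frames each, so the
# deterministic response a_0(t) repeats with base frequency 1/D.
P, D = 20, 110
N = P * D
t = np.arange(N)
response = np.exp(-((t % D) - 30) ** 2 / 200.0)   # same bump every presentation
a = response + 0.3 * rng.standard_normal(N)

# The periodic part lives only at nu = m/D, i.e., at every P-th bin of
# the 1/N frequency grid; noise power is spread over all bins.
A = np.fft.rfft(a - a.mean())
power = np.abs(A) ** 2
harmonics = np.arange(P, N // 2 + 1, P)           # indices of the grid m/D
frac = power[harmonics].sum() / power.sum()
print(f"fraction of power on the harmonic grid: {frac:.2f}")
```

Only about 5% of the bins lie on the harmonic grid, so a fraction far above 5% reflects the stimulus-locked component that the harmonic analysis below is designed to extract.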
Next, we denote a taper by w_k(t) and for notational simplicity suppress T and W. Consider the tapered transform

    J_k(a; \nu) = \int e^{-2\pi i \nu t} w_k(t)\, a(t)\, dt,   (30)

and therefore

    J_k(a; \nu_1) \approx \tilde{a}_0(\nu_1)\, \tilde{w}_k(0)   (31)

for k = 1, \ldots, K. We can now use a least squares fit to find \tilde{a}_0(\nu_1), minimizing

    \sum_{k=1}^{K} \big| J_k(a; \nu_1) - \tilde{a}_0(\nu_1)\, \tilde{w}_k(0) \big|^2,   (32)

which thus yields

    \hat{a}_0(\nu_1) = \frac{ \sum_{k=1}^{K} J_k(a; \nu_1)\, \tilde{w}_k(0) }{ \sum_{k=1}^{K} |\tilde{w}_k(0)|^2 }.   (33)

We can construct a ratio of two variances (Miller, 1973), which is F-distributed with 2 and 2K - 2 degrees of freedom under the null hypothesis that \tilde{a}_0 = 0, in which case \hat{a}_0 is Gaussian distributed with zero mean. The ratio F, referred to as the F statistic, takes the form

    F = \frac{ (K - 1)\, |\hat{a}_0(\nu_1)|^2 \sum_{k=1}^{K} |\tilde{w}_k(0)|^2 }{ \sum_{k=1}^{K} \big| J_k(a; \nu_1) - \hat{J}_k(a; \nu_1) \big|^2 },   (34)

where

    \hat{J}_k(a; \nu_1) = \hat{a}_0(\nu_1)\, \tilde{w}_k(0).   (35)

If a_0 \ne 0, then it is suggested in Mitra and Pesaran (1999) that the F statistic should exceed the 1 - 1/N quantile of this distribution, where N is the length of the time series. The above estimate for a_0 (Eq. (33)), combined with the F statistic (Eq. (34)) and the test that it exceed the 1 - 1/N quantile, form an F test for the statistical significance of a sinusoidal component in the time series.

The results from the regression analysis may be used to calculate error bars for each extracted harmonic. The variance of an estimated harmonic is

    \mathrm{Var}(\hat{a}) = \frac{ S_N(\nu_1) }{ \sum_{k=1}^{K} |\tilde{w}_k(0)|^2 },   (36)

where the local continuous part of the spectrum may be estimated as

    S_N(\nu_1) = \frac{1}{K - 1} \sum_{k=1}^{K} \big| J_k(a; \nu_1) - \hat{J}_k(a; \nu_1) \big|^2.   (37)

One-sigma error bars, for example, may then be estimated as \mathrm{Var}(\hat{a}(\nu))^{1/2}.

With the above F test in hand, we determine and extract the statistically significant harmonic content at multiples of the base frequency 1/D of each a(t) obtained from the PCA of the raw data; we denote this harmonic content by a^h(t) and will refer to it as a harmonic stack. It is important to note that with the harmonic content, we include the signal average. This is important, since, in the analysis of optical imaging data, the average picture has spatial information correlated with the stimulus.
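The estimate of Eq. (33) and the F test of Eq. (34) can be sketched as follows. This is a minimal illustration on synthetic data; the amplitudes, noise level, and the use of SciPy's `dpss` and `scipy.stats.f` are assumptions of this sketch, not the authors' code.

```python
import numpy as np
from scipy.signal.windows import dpss
from scipy.stats import f as f_dist

rng = np.random.default_rng(2)
N, NW, K = 1024, 4, 7
t = np.arange(N)
nu1 = 50.0 / N                      # a known harmonic, as in the stacked data
a = 0.2 * np.cos(2 * np.pi * nu1 * t) + 0.2 * rng.standard_normal(N)

tapers = dpss(N, NW, Kmax=K)        # Slepian tapers w_k(t)

# Tapered transforms J_k(a; nu_1) of Eq. (30) and taper means w~_k(0);
# the latter vanish for the antisymmetric (odd-order) tapers.
J = (tapers * a) @ np.exp(-2j * np.pi * nu1 * t)
w0 = tapers.sum(axis=1)

# Least-squares amplitude estimate, Eq. (33), and F statistic, Eq. (34).
a0_hat = (J @ w0) / (w0 @ w0)
resid = J - a0_hat * w0             # residuals J_k - J^_k, cf. Eq. (35)
F = (K - 1) * np.abs(a0_hat) ** 2 * (w0 @ w0) / np.sum(np.abs(resid) ** 2)

# Significance at the 1 - 1/N level of F(2, 2K-2), following
# Mitra and Pesaran (1999).
threshold = f_dist.ppf(1 - 1.0 / N, 2, 2 * K - 2)
print(f"F = {F:.1f}, threshold = {threshold:.1f}, significant: {F > threshold}")
```

Since the test signal is A cos(2 pi nu_1 t), the recovered complex amplitude should be close to A/2; the F statistic comfortably exceeds the threshold at this signal-to-noise ratio.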
For instance, the standard difference mentioned in the Introduction uses only this information to determine the spatial signature of responses to various stimuli.

Vector principal component analysis

With this in mind, we then throw away the nonharmonic content of the principal components and reconstruct a new set of images,

    F_h(x, t) = \sum_{n=1}^{N} \mu_n a_n^h(t)\, \phi_n(x).   (38)

As a last step, we perform a PCA on this data set, obtaining a modified decomposition

    F_h(x, t) = \sum_{n=1}^{N} \mu'_n a'_n(t)\, \phi'_n(x).   (39)

The purpose of this second PCA is to decorrelate the various components of the reconstructed signal F_h. This is also necessary since the extracted harmonic coefficients a_n^h(t) of the original PCA are no longer orthogonal. In this final PCA, the first few modes contain the primary harmonic content. We take this to be the sought-after response or signal.

References

Arieli, A., Sterkin, A., Grinvald, A., Aertsen, A., 1996. Dynamics of ongoing activity: explanation of the large variability in evoked cortical responses. Science 273, 1868–1871.

Bell, A., Sejnowski, T., 1995. An information-maximization approach to blind separation and blind deconvolution. Neural Comput. 7, 1129–1159.

Everson, R., Knight, B., Sirovich, L., 1997. Separating spatially distributed response to stimulation from background. I. Optical imaging. Biol. Cybern. 77.

Everson, R., Prashanth, A., Gabbay, M., Knight, B., Sirovich, L., Kaplan, E., 1998. Representation of spatial frequency and orientation in the visual cortex. Proc. Natl. Acad. Sci. USA 95.

Gabbay, M., Brennan, C., Kaplan, E., Sirovich, L., 2000. A principal components-based method for the detection of neuronal activity maps: application to optical imaging. NeuroImage 11.

Golanov, E., Yamamoto, S., Reis, D., 1994. Spontaneous waves of cerebral blood flow associated with a pattern of electrocortical activity. Am. J. Physiol. 266, R204–R214.

Mayhew, J., Askew, S., Zheng, Y., Porrill, J., Westby, G., Redgrave, P., Rector, D., Harper, R., 1996. Cerebral vasomotion: a 0.1-Hz oscillation in reflected light imaging of neural activity. NeuroImage 4.

Miller, K.S., 1973. Complex linear least squares. SIAM Rev. 15.

Mitra, P., Pesaran, B., 1999. Analysis of dynamic brain imaging data. Biophys. J. 76, 691–708.

Percival, D., Walden, A., 1993. Spectral Analysis for Physical Applications. Cambridge University Press, New York.

Sirovich, L., 1987. Turbulence and the dynamics of coherent structures, parts I, II, and III. Q. Appl. Math. XLV(3).

Sirovich, L., Everson, R., 1992. Management and analysis of large scientific datasets. Int. J. Supercomput. Appl. 6.

Sirovich, L., Kaplan, E., 2002. Analysis methods for optical imaging. In: Frostig, R.D. (Ed.), In Vivo Optical Imaging of Brain Function. CRC Press, Boca Raton, FL.

Slepian, D., 1964. Prolate spheroidal wave functions, Fourier analysis and uncertainty, IV. Bell Syst. Tech. J. 43, 3009–3057.

Thomson, D., 1982. Spectrum estimation and harmonic analysis. Proc. IEEE 70, 1055–1096.

Yokoo, T., Knight, B., Sirovich, L., 2001. An optimization approach to signal extraction from noisy multivariate data. NeuroImage 14, 1309–1326.


More information

V003 How Reliable Is Statistical Wavelet Estimation?

V003 How Reliable Is Statistical Wavelet Estimation? V003 How Reliable Is Statistical Wavelet Estimation? J.A. Edgar* (BG Group) & M. van der Baan (University of Alberta) SUMMARY Well logs are often used for the estimation of seismic wavelets. The phase

More information

Simple-Cell-Like Receptive Fields Maximize Temporal Coherence in Natural Video

Simple-Cell-Like Receptive Fields Maximize Temporal Coherence in Natural Video LETTER Communicated by Bruno Olshausen Simple-Cell-Like Receptive Fields Maximize Temporal Coherence in Natural Video Jarmo Hurri jarmo.hurri@hut.fi Aapo Hyvärinen aapo.hyvarinen@hut.fi Neural Networks

More information

Principal Component Analysis CS498

Principal Component Analysis CS498 Principal Component Analysis CS498 Today s lecture Adaptive Feature Extraction Principal Component Analysis How, why, when, which A dual goal Find a good representation The features part Reduce redundancy

More information

SUPPLEMENTARY INFORMATION

SUPPLEMENTARY INFORMATION doi:1.138/nature1878 I. Experimental setup OPA, DFG Ti:Sa Oscillator, Amplifier PD U DC U Analyzer HV Energy analyzer MCP PS CCD Polarizer UHV Figure S1: Experimental setup used in mid infrared photoemission

More information

Influence of Criticality on 1/f α Spectral Characteristics of Cortical Neuron Populations

Influence of Criticality on 1/f α Spectral Characteristics of Cortical Neuron Populations Influence of Criticality on 1/f α Spectral Characteristics of Cortical Neuron Populations Robert Kozma rkozma@memphis.edu Computational Neurodynamics Laboratory, Department of Computer Science 373 Dunn

More information

Non-negative Matrix Factor Deconvolution; Extraction of Multiple Sound Sources from Monophonic Inputs

Non-negative Matrix Factor Deconvolution; Extraction of Multiple Sound Sources from Monophonic Inputs MITSUBISHI ELECTRIC RESEARCH LABORATORIES http://www.merl.com Non-negative Matrix Factor Deconvolution; Extraction of Multiple Sound Sources from Monophonic Inputs Paris Smaragdis TR2004-104 September

More information

Emergence of Phase- and Shift-Invariant Features by Decomposition of Natural Images into Independent Feature Subspaces

Emergence of Phase- and Shift-Invariant Features by Decomposition of Natural Images into Independent Feature Subspaces LETTER Communicated by Bartlett Mel Emergence of Phase- and Shift-Invariant Features by Decomposition of Natural Images into Independent Feature Subspaces Aapo Hyvärinen Patrik Hoyer Helsinki University

More information

NON-LINEAR PARAMETER ESTIMATION USING VOLTERRA AND WIENER THEORIES

NON-LINEAR PARAMETER ESTIMATION USING VOLTERRA AND WIENER THEORIES Journal of Sound and Vibration (1999) 221(5), 85 821 Article No. jsvi.1998.1984, available online at http://www.idealibrary.com on NON-LINEAR PARAMETER ESTIMATION USING VOLTERRA AND WIENER THEORIES Department

More information

Principal Component Analysis

Principal Component Analysis Principal Component Analysis Anders Øland David Christiansen 1 Introduction Principal Component Analysis, or PCA, is a commonly used multi-purpose technique in data analysis. It can be used for feature

More information

A Tutorial on Data Reduction. Principal Component Analysis Theoretical Discussion. By Shireen Elhabian and Aly Farag

A Tutorial on Data Reduction. Principal Component Analysis Theoretical Discussion. By Shireen Elhabian and Aly Farag A Tutorial on Data Reduction Principal Component Analysis Theoretical Discussion By Shireen Elhabian and Aly Farag University of Louisville, CVIP Lab November 2008 PCA PCA is A backbone of modern data

More information

Adaptation to a 'spatial-frequency doubled' stimulus

Adaptation to a 'spatial-frequency doubled' stimulus Perception, 1980, volume 9, pages 523-528 Adaptation to a 'spatial-frequency doubled' stimulus Peter Thompson^!, Brian J Murphy Department of Psychology, University of Pennsylvania, Philadelphia, Pennsylvania

More information

Linear Subspace Models

Linear Subspace Models Linear Subspace Models Goal: Explore linear models of a data set. Motivation: A central question in vision concerns how we represent a collection of data vectors. The data vectors may be rasterized images,

More information

Dominant Feature Vectors Based Audio Similarity Measure

Dominant Feature Vectors Based Audio Similarity Measure Dominant Feature Vectors Based Audio Similarity Measure Jing Gu 1, Lie Lu 2, Rui Cai 3, Hong-Jiang Zhang 2, and Jian Yang 1 1 Dept. of Electronic Engineering, Tsinghua Univ., Beijing, 100084, China 2 Microsoft

More information

Impeller Fault Detection for a Centrifugal Pump Using Principal Component Analysis of Time Domain Vibration Features

Impeller Fault Detection for a Centrifugal Pump Using Principal Component Analysis of Time Domain Vibration Features Impeller Fault Detection for a Centrifugal Pump Using Principal Component Analysis of Time Domain Vibration Features Berli Kamiel 1,2, Gareth Forbes 2, Rodney Entwistle 2, Ilyas Mazhar 2 and Ian Howard

More information

Recipes for the Linear Analysis of EEG and applications

Recipes for the Linear Analysis of EEG and applications Recipes for the Linear Analysis of EEG and applications Paul Sajda Department of Biomedical Engineering Columbia University Can we read the brain non-invasively and in real-time? decoder 1001110 if YES

More information

Large Sample Properties of Estimators in the Classical Linear Regression Model

Large Sample Properties of Estimators in the Classical Linear Regression Model Large Sample Properties of Estimators in the Classical Linear Regression Model 7 October 004 A. Statement of the classical linear regression model The classical linear regression model can be written in

More information

Continuous Time Signal Analysis: the Fourier Transform. Lathi Chapter 4

Continuous Time Signal Analysis: the Fourier Transform. Lathi Chapter 4 Continuous Time Signal Analysis: the Fourier Transform Lathi Chapter 4 Topics Aperiodic signal representation by the Fourier integral (CTFT) Continuous-time Fourier transform Transforms of some useful

More information

Adaptive linear prediction filtering for random noise attenuation Mauricio D. Sacchi* and Mostafa Naghizadeh, University of Alberta

Adaptive linear prediction filtering for random noise attenuation Mauricio D. Sacchi* and Mostafa Naghizadeh, University of Alberta Adaptive linear prediction filtering for random noise attenuation Mauricio D. Sacchi* and Mostafa Naghizadeh, University of Alberta SUMMARY We propose an algorithm to compute time and space variant prediction

More information

Extended-time multi-taper frequency domain cross-correlation receiver function estimation

Extended-time multi-taper frequency domain cross-correlation receiver function estimation Extended-time multi-taper frequency domain cross-correlation receiver function estimation George Helffrich Earth Sciences, University of Bristol, Wills Mem. Bldg., Queen s Road, Bristol BS8 1RJ, UK Manuscript

More information

Time Series: Theory and Methods

Time Series: Theory and Methods Peter J. Brockwell Richard A. Davis Time Series: Theory and Methods Second Edition With 124 Illustrations Springer Contents Preface to the Second Edition Preface to the First Edition vn ix CHAPTER 1 Stationary

More information

An algorithm for detecting oscillatory behavior in discretized data: the damped-oscillator oscillator detector

An algorithm for detecting oscillatory behavior in discretized data: the damped-oscillator oscillator detector An algorithm for detecting oscillatory behavior in discretized data: the damped-oscillator oscillator detector David Hsu, Murielle Hsu, He Huang and Erwin B. Montgomery, Jr Department of Neurology University

More information

CHAPTER 2. Frequency Domain Analysis

CHAPTER 2. Frequency Domain Analysis FREQUENCY DOMAIN ANALYSIS 16 CHAPTER 2 Frequency Domain Analysis ASSESSMENTOF FREQUENCY DOMAIN FORCE IDENTIFICATION PROCEDURES CHAPTE,R 2. FREQUENCY DOMAINANALYSIS 17 2. FREQUENCY DOMAIN ANALYSIS The force

More information

AN EFFECTIVE FILTER FOR REMOVAL OF PRODUCTION ARTIFACTS IN U.S. GEOLOGICAL SURVEY 7.5-MINUTE DIGITAL ELEVATION MODELS* ABSTRACT 1.

AN EFFECTIVE FILTER FOR REMOVAL OF PRODUCTION ARTIFACTS IN U.S. GEOLOGICAL SURVEY 7.5-MINUTE DIGITAL ELEVATION MODELS* ABSTRACT 1. AN EFFECTIVE FILTER FOR REMOVAL OF PRODUCTION ARTIFACTS IN U.S. GEOLOGICAL SURVEY 7.5-MINUTE DIGITAL ELEVATION MODELS* Michael J. Oimoen Raytheon ITSS / USGS EROS Data Center Sioux Falls, South Dakota

More information

Eigenimaging for Facial Recognition

Eigenimaging for Facial Recognition Eigenimaging for Facial Recognition Aaron Kosmatin, Clayton Broman December 2, 21 Abstract The interest of this paper is Principal Component Analysis, specifically its area of application to facial recognition

More information

Mid Year Project Report: Statistical models of visual neurons

Mid Year Project Report: Statistical models of visual neurons Mid Year Project Report: Statistical models of visual neurons Anna Sotnikova asotniko@math.umd.edu Project Advisor: Prof. Daniel A. Butts dab@umd.edu Department of Biology Abstract Studying visual neurons

More information

Detection of Subsurface Defects using Active Infrared Thermography

Detection of Subsurface Defects using Active Infrared Thermography Detection of Subsurface Defects using Active Infrared Thermography More Info at Open Access Database www.ndt.net/?id=15141 Suman Tewary 1,2,a, Aparna Akula 1,2, Ripul Ghosh 1,2, Satish Kumar 2, H K Sardana

More information

Linear and Non-Linear Responses to Dynamic Broad-Band Spectra in Primary Auditory Cortex

Linear and Non-Linear Responses to Dynamic Broad-Band Spectra in Primary Auditory Cortex Linear and Non-Linear Responses to Dynamic Broad-Band Spectra in Primary Auditory Cortex D. J. Klein S. A. Shamma J. Z. Simon D. A. Depireux,2,2 2 Department of Electrical Engineering Supported in part

More information

SUPPLEMENTARY INFORMATION

SUPPLEMENTARY INFORMATION doi:10.1038/nature Table of contents 1. Concatenation vs. averaging 2 2. Sampling uniformity after SVD analysis 2 3. Emergence of reliable dynamical information 4. Characterization of approach by reference

More information

Wavelet Transform. Figure 1: Non stationary signal f(t) = sin(100 t 2 ).

Wavelet Transform. Figure 1: Non stationary signal f(t) = sin(100 t 2 ). Wavelet Transform Andreas Wichert Department of Informatics INESC-ID / IST - University of Lisboa Portugal andreas.wichert@tecnico.ulisboa.pt September 3, 0 Short Term Fourier Transform Signals whose frequency

More information

TWO METHODS FOR ESTIMATING OVERCOMPLETE INDEPENDENT COMPONENT BASES. Mika Inki and Aapo Hyvärinen

TWO METHODS FOR ESTIMATING OVERCOMPLETE INDEPENDENT COMPONENT BASES. Mika Inki and Aapo Hyvärinen TWO METHODS FOR ESTIMATING OVERCOMPLETE INDEPENDENT COMPONENT BASES Mika Inki and Aapo Hyvärinen Neural Networks Research Centre Helsinki University of Technology P.O. Box 54, FIN-215 HUT, Finland ABSTRACT

More information

FGN 1.0 PPL 1.0 FD

FGN 1.0 PPL 1.0 FD FGN PPL FD f f Figure SDFs for FGN, PPL and FD processes (top to bottom rows, respectively) on both linear/log and log/log aes (left- and right-hand columns, respectively) Each SDF S X ( ) is normalized

More information