Application of Hopfield neural network for extracting Doppler spectrum from ocean echo


RADIO SCIENCE, VOL. 41, doi:10.1029/2005RS00334, 2006

Application of Hopfield neural network for extracting Doppler spectrum from ocean echo

Renzhuo Gui and Zijie Yang
School of Electronic Information, Wuhan University, Wuhan, China

Received 5 July 2005; revised 4 February 2006; accepted 9 March 2006; published 7 July 2006.

[1] This paper proposes a Hopfield-type neural network (HNN) method for extracting the Doppler spectrum from ocean echo. First, it introduces the basic principle of using an HNN for optimization. Second, extending the principle of autoregressive (AR) frequency spectrum estimation, we show how to apply the HNN to spectrum estimation. Last, three methods are used to process actual data: the conventional fast Fourier transform (FFT) method, the modern AR spectrum estimation method, and the HNN-based spectrum estimation method. The results obtained with the three methods show that HNN-based spectrum estimation is feasible for extracting the Doppler spectrum from ocean echo.

Citation: Gui, R., and Z. Yang (2006), Application of Hopfield neural network for extracting Doppler spectrum from ocean echo, Radio Sci., 41, doi:10.1029/2005RS00334.

Copyright 2006 by the American Geophysical Union.

1. Introduction

[2] High-frequency ground wave radar is widely used to detect ocean dynamic parameters and targets such as icebergs and vessels. At present we use a frequency-modulated continuous wave (FMCW) system in which the ocean echo is processed with a dual fast Fourier transform (FFT). Range information is extracted through the first FFT; Doppler information for a given range cell is then acquired with a second FFT applied to the first-FFT results over a coherent integration time. In this way, information on ocean dynamic parameters and targets can be extracted. Generally, an accumulation time of about 13 min is needed to sense ocean dynamic parameters, and one of about 3 min to detect a target [Barrick et al., 1994; Khan and Mitchell, 1991]. The modern autoregressive (AR) spectrum estimation method needs less accumulation time than the conventional FFT because it requires less data; however, its computation is complex and time consuming [Vizinho and Wyatt, 1996]. This paper proposes a spectrum estimation method based on a Hopfield-type neural network (HNN) that overcomes this disadvantage of modern spectrum estimation while preserving its advantage of a short accumulation time. Moreover, because the HNN-based method computes in parallel with an iterative algorithm, its computation is neither complex nor time consuming [Ham and Kostanic, 2003; Hopfield, 1982].

2. HNN Basic Principles

[3] The American physicist J. J. Hopfield published two influential papers on neural networks [Hopfield, 1982; Hopfield and Tank, 1985]. He proposed a design consisting of interconnected devices and defined an energy function in terms of the states and connection weights of the neurons. An HNN composed of three neurons is shown in Figure 1 to illustrate the structure. The first layer is treated as the input of the network; it contains no true neurons and performs no computation. The second layer, composed of true neurons, accumulates the products of the inputs and the weight coefficients.
Then the nonlinear transfer function of the neurons processes the accumulated results, yielding the output of the HNN. Generally speaking, the HNN model is a recurrent neural network, because the output is connected back to the input; the input of the HNN therefore undergoes a continuous change of state. When an input is selected, an output is obtained; the feedback then changes the input, and a new output is produced. This feedback runs continuously. If the HNN is stable, the change caused by feedback and iterative computation becomes smaller and smaller until the network reaches equilibrium, at which point the output of the HNN is fixed.
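To make this feedback loop concrete, here is a minimal sketch (not taken from the paper) that runs the accumulate-transfer-feedback cycle of a three-neuron network until the change caused by feedback dies out; the weights, thresholds, and tanh transfer function are all illustrative assumptions.

```python
import numpy as np

# Hypothetical three-neuron recurrent net: symmetric weights, zero diagonal.
W = np.array([[0.0, 0.3, -0.2],
              [0.3, 0.0, 0.4],
              [-0.2, 0.4, 0.0]])
I = np.array([0.1, -0.05, 0.2])   # threshold (bias) of each neuron

x = np.zeros(3)                   # initial output, fed back as input
for step in range(100):
    x_new = np.tanh(W @ x + I)    # accumulate, then nonlinear transfer
    if np.max(np.abs(x_new - x)) < 1e-9:
        break                     # feedback change has died out
    x = x_new
print(f"equilibrium after {step} steps:", np.round(x, 6))
```

Because these small symmetric weights make the update a contraction, the loop settles to a fixed point, the state of equilibrium described above.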

Figure 1. Structure of an HNN composed of three neurons.

[4] The HNN has since been widely applied in many fields, such as the traveling salesman problem, pattern recognition, and optimization control. It can be used to solve problems of associative memory and of optimization. The outputs of the neurons can be either continuous or discrete; correspondingly, HNNs divide into two kinds, continuous and discrete. The continuous HNN solves optimization problems, and the discrete HNN solves associative memory problems. The continuous HNN is a single-layer feedback network in which each neuron operates according to

$$\frac{du_i}{dt} = -\frac{1}{\tau}u_i + \sum_{j=1}^{N} W_{ij}X_j + I_i, \qquad (1)$$

where X_i = F(u_i), i = 1, 2, ..., N; W_ij = W_ji; N is the number of neurons; u_i is the internal state of the ith neuron; I_i is the threshold of the ith neuron; and F(.) is the transfer function, usually a sigmoid. In fact, the continuous HNN is a continuous nonlinear dynamical system and can be described by a group of nonlinear differential equations. When the initial state is determined, the trajectory of the network is computed by solving these equations; if the system is stable, it converges to a steady point. A Lyapunov energy function can be defined to describe the continuous HNN:

$$E = -\frac{1}{2}\sum_{i=1}^{N}\sum_{j=1}^{N} W_{ij}X_iX_j - \sum_{i=1}^{N} X_iI_i + \frac{1}{\tau}\sum_{i=1}^{N}\int_0^{u_i} f^{-1}(\eta)\,d\eta. \qquad (2)$$

One can prove that this energy function is bounded and that dE/dt <= 0, which demonstrates that the system is stable. The state of the network always changes so as to decrease E until E reaches a minimum, at which point X_i, the steady output of the ith neuron, is constant. Therefore the HNN computes a minimum automatically, and, as discussed above, the continuous HNN can be used to solve optimization problems [Ham and Kostanic, 2003; Hopfield, 1982].
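As a numerical illustration of equations (1) and (2), the following hedged sketch (the toy weights, thresholds, and time step are assumptions, and F is taken to be tanh) integrates the dynamics with an explicit Euler step and prints the Lyapunov energy, which decreases monotonically toward a steady point.

```python
import numpy as np

rng = np.random.default_rng(0)
N, tau, dt = 8, 1.0, 0.01
S = rng.normal(size=(N, N))
W = (S + S.T) / 2.0                # symmetric weights, W_ij = W_ji
np.fill_diagonal(W, 0.0)
I = rng.normal(size=N)             # thresholds I_i

def energy(X):
    # Equation (2) with f^{-1} = artanh; the integral has the closed form
    # X*artanh(X) + 0.5*log(1 - X^2).
    integral = X * np.arctanh(X) + 0.5 * np.log1p(-X**2)
    return -0.5 * X @ W @ X - I @ X + integral.sum() / tau

u = 0.1 * rng.normal(size=N)       # internal states u_i
for step in range(5001):
    X = np.tanh(u)                 # X_i = F(u_i)
    u += dt * (-u / tau + W @ X + I)   # equation (1), explicit Euler step
    if step % 1000 == 0:
        print(f"step {step:4d}  E = {energy(np.tanh(u)):+.6f}")
print("steady outputs:", np.round(np.tanh(u), 4))
```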

3. HNN Design

[5] First, we briefly describe AR spectrum estimation. A pth-order AR model is presumed to satisfy the difference equation

$$x(n) + a_1 x(n-1) + \cdots + a_p x(n-p) = e(n), \qquad (3)$$

where a_1, a_2, ..., a_p are constants, a_p != 0, and e(n) is a white noise sequence of variance sigma_e^2. We define the predicted value x̂(n), computed from the parameters a_1, a_2, ..., a_p, so that the prediction error e(n) = x(n) - x̂(n) can, by equation (3), be written

$$e(n) = x(n) + \sum_{i=1}^{p} a_i x(n-i). \qquad (4)$$

AR spectrum estimation is realized by estimating the parameters a_1, a_2, ..., a_p that make E[e^2(n)] reach its minimum E[e^2(n)]_min. Concretely, they are obtained by solving the Yule-Walker equations

$$\begin{pmatrix} r_x(0) & r_x(1) & \cdots & r_x(p) \\ r_x(1) & r_x(0) & \cdots & r_x(p-1) \\ \vdots & \vdots & & \vdots \\ r_x(p) & r_x(p-1) & \cdots & r_x(0) \end{pmatrix} \begin{pmatrix} 1 \\ a(1) \\ \vdots \\ a(p) \end{pmatrix} = \begin{pmatrix} E[e^2(n)]_{\min} \\ 0 \\ \vdots \\ 0 \end{pmatrix}, \qquad (5)$$

which yields the coefficients a_1, a_2, ..., a_p and E[e^2(n)]_min. Here r_x(m) = E[x(n)x(n+m)]. According to

$$\hat{P}_x(\omega) = \frac{E[e^2(n)]_{\min}}{\left|1 + \sum_{k=1}^{p} a_k e^{-j\omega k}\right|^2}, \qquad (6)$$

we can estimate the signal frequency spectrum [Cristi, 2003]; here E[e^2(n)]_min is the variance of the white noise in the AR model. Since we care mainly about the coefficients a_1, a_2, ..., a_p, E[e^2(n)]_min can be normalized to 1, giving the normalized equation

$$\hat{P}_x(\omega) = \frac{1}{\left|1 + \sum_{k=1}^{p} a_k e^{-j\omega k}\right|^2}, \qquad (7)$$

which can be used to realize spectrum estimation [Kay and Marple, 1981].

[6] In order to use a neural network to solve an optimization problem, we must set up a correspondence between the network and the problem. In this section we show how to relate the HNN to the problem of finding the parameters a_1, a_2, ..., a_p that minimize E[e^2(n)], which can be expressed as

$$E[e^2(n)] = E\left[\left\{x(n) - \sum_{i=1}^{p} a_i x(n-i)\right\}^2\right]. \qquad (8)$$

It is not difficult to see that the AR model parameters a_1, a_2, ..., a_p can be estimated by minimizing the sum $\sum_{n=p+1}^{N}\{x(n) - \sum_{i=1}^{p} a_i x(n-i)\}^2$, where N is the number of data points used to estimate the spectrum. This sum can be expressed in the matrix norm form

$$\|B - XA\|^2, \qquad (9)$$

where B = [x(p+1), x(p+2), ..., x(N)]^t, X = [X_1, X_2, ..., X_p], A = [a_1, a_2, ..., a_p]^t, and

X_1 = [x(p), x(p+1), ..., x(N-1)]^t,
X_2 = [x(p-1), x(p), ..., x(N-2)]^t,
...
X_p = [x(1), x(2), ..., x(N-p)]^t.

Unfolding equation (9) gives

$$\|B - XA\|^2 = B^tB - B^tXA - (XA)^tB + (XA)^tXA. \qquad (10)$$

B^tB can be neglected because it does not depend on the parameters a_1, a_2, ..., a_p. The rest of the right-hand side of equation (10) can be written as the polynomial $\sum_{i=1}^{p}\sum_{j=1}^{p} (X_i^tX_j)\,a_i a_j - 2\sum_{i=1}^{p} (B^tX_i)\,a_i$. Comparing this with the energy function of the HNN, the correspondence follows directly: the joining weight between the ith and jth neurons is W_ij = -2X_i^tX_j, and the threshold of the ith neuron is I_i = 2B^tX_i. From this relation between the AR method and the HNN method, the number of neurons in the HNN corresponds to the order of the AR model. A sigmoid function, whose output is always positive, cannot be selected as the transfer function F(.), because the outputs of the neurons in this HNN can be either positive or negative; the rule governing the choice of F(.) is that its derivative must not be negative [Minsky and Papert, 1988]. Here we select a special purelin (linear) function [Hopfield and Tank, 1985], v_i = u_i/beta, designed for the problem in this paper. The value of beta is chosen according to the weight and threshold values of the HNN; a suitable value makes the network converge to the minimum quickly. In our simulation experiment, beta = 40.
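The mapping above is easy to prototype. The sketch below is an interpretation, not the authors' code: it builds B and X from a data record, forms W_ij = -2X_i^tX_j and I_i = 2B^tX_i, runs the gradient-like iteration that the purelin HNN realizes, and evaluates equation (7). The test signal, model order, step size, and iteration count are all assumptions.

```python
import numpy as np

def hnn_ar_spectrum(x, p=16, beta=40.0, iters=5000, nfft=512):
    """Hedged sketch of HNN-based AR spectrum estimation (equations 8-10 and 7)."""
    x = np.asarray(x, dtype=float)
    N = len(x)
    B = x[p:N]                                    # B = [x(p+1), ..., x(N)]^t
    X = np.column_stack([x[p - i:N - i] for i in range(1, p + 1)])
    W = -2.0 * X.T @ X                            # joining weights W_ij
    I = 2.0 * X.T @ B                             # thresholds I_i
    lr = 1.0 / np.linalg.norm(X.T @ X, 2)         # stable descent step
    a = np.zeros(p)                               # neuron outputs = AR coefficients
    for _ in range(iters):
        grad = -(W @ a + I)                       # gradient of ||B - XA||^2
        a -= lr * grad / beta                     # purelin-scaled update
    # Equation (3) uses x(n) + sum a_k x(n-k) = e(n), so flip the sign of
    # the least squares coefficients before applying equation (7).
    poly = np.concatenate(([1.0], -a))
    return 1.0 / np.abs(np.fft.rfft(poly, nfft))**2   # equation (7)

# Toy check: a sinusoid in noise should produce a clear spectral peak.
fs, f0, nfft = 100.0, 12.5, 512
t = np.arange(256) / fs
sig = np.sin(2 * np.pi * f0 * t) + 0.1 * np.random.default_rng(1).normal(size=t.size)
P = hnn_ar_spectrum(sig, p=16, nfft=nfft)
print("peak near", np.argmax(P) * fs / nfft, "Hz")   # expect about 12.5 Hz
```

Because the purelin network performs plain gradient descent on the quadratic ||B - XA||^2, its fixed point is the least squares solution, exactly the minimizer sought in equation (8).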

4 Figure. Processing results using the conventional FFT method in the middle range for the detection of ocean dynamic parameters. HNN. A suitable one can make the network converge to minimum quickly. The value of b is 40 in my simulating experiment. [7] HNN usually has the problem of local minimum, which may interfere with its converging to global minimum and may produce wrong results. However, the equation in this paper for computing the minimum is a simple quadratic polynomial without any constraint, so it has only one global minimum without local minimum. Figure 4. Processing results using the spectrum estimation method based on HNN in the middle range for the detection of ocean dynamic parameters. As a result, it is impossible for the network of HNN to converge to an incorrect local minimum. 4. Application and Comparison [8] In this section, we want to prove the feasibility of the spectrum estimation method based on HNN. The three methods, that is, the conventional FFT method, the modern spectrum estimation AR method, and the spectrum estimation method based on HNN, are utilized to process the same actual data. Then we compare processing results. Figure 3. Processing results using the AR spectrum estimation method in the middle range for the detection of ocean dynamic parameters. Figure 5. Processing results using the conventional FFT method for target detection. 4of6

[9] First, we select actual data from the Radio Propagation Laboratory of Wuhan University for extracting ocean dynamic parameters, collected at 0645 UT on 13 April 2004 on Zhujiajian Island, Zhejiang Province, China. The radar parameters were set as follows. The carrier frequency is MHz, and the range resolution is 2.5 km. The sweep period is s, and the coherent integration time is about 13 min. The number of range cells is 8, and the number of data points is 1024. The same data are processed by the three methods: the FFT method uses the whole data set, whereas the AR and HNN methods use 256 points. In the AR and HNN methods the sliding window length is 192 and the number of sliding windows is 64. The order of the AR method is 64, and the number of neurons in the HNN is also 64. The processing results of the three methods are shown in Figures 2, 3, and 4, respectively.

[10] From Figures 2-4 it can be seen that the spectrum estimation method based on HNN produces a satisfactory result in the detection of ocean dynamic parameters. The region of the first-order echo spectrum produced by the HNN-based method is consistent with the regions produced by the other two methods; moreover, its signal-to-noise ratio (SNR) is about 35 dB, higher than the roughly 20 dB of the other two methods.

[11] Next we consider the application of the HNN-based spectrum estimation method to target detection. Actual data from the Radio Propagation Laboratory of Wuhan University are selected for extracting target information, collected at 005 UT on 19 April 2004 on Zhujiajian Island, Zhejiang Province, China. The radar parameters were set as follows. The carrier frequency is 7.98 MHz, and the range resolution is 1.5 km. The sweep period is s, and the coherent integration time is about 3 min. The number of range cells is 44. The order of the AR method is 64, and the number of neurons in the HNN is also 64. The processing results of the three methods are shown in Figures 5, 6, and 7, respectively.

Figure 6. Processing results using the AR spectrum estimation method for target detection.

Figure 7. Processing results using the spectrum estimation method based on HNN for target detection.

[12] The spectrum estimation method based on HNN can detect the target just as the other two methods can. The SNR is 30 dB, higher than the 20 dB of the other two methods. Because the number of neurons corresponds to the order of the AR model, the Akaike information criterion used to determine the AR order [Kashyap, 1980] can also be used to determine the number of neurons. However, the spectrum spreading introduced by the HNN-based method should be considered in future research; selecting the middle value of the first-order spectral region can mitigate this disadvantage.

5. Conclusions

[13] We have shown that an HNN can be applied to extract the Doppler spectrum from ocean echo. The HNN-based spectrum estimation method has two advantages. Like modern spectrum estimation, it reduces the accumulation time because less data are required. Moreover, its capability of parallel processing and iterative computation allows it to avoid the large, complex computation of AR spectrum estimation. The results of processing actual data indicate that the HNN-based spectrum estimation method is feasible for extracting the Doppler spectrum from ocean echo.
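To make the sliding-window processing of section 4 concrete, here is a hedged sketch of the averaging over 64 windows of length 192 with a 64-neuron model, reusing the hnn_ar_spectrum sketch given in section 3; the synthetic range cell record and the one-sample window shift are assumptions standing in for the real radar echo and the authors' exact windowing.

```python
import numpy as np

def windowed_doppler_spectrum(cell, win=192, n_win=64, order=64, nfft=512):
    """Average HNN-based AR spectra over sliding windows of one range cell."""
    spectra = [hnn_ar_spectrum(cell[k:k + win], p=order, nfft=nfft)
               for k in range(n_win)]         # slide one sample at a time
    return np.mean(spectra, axis=0)           # averaging steadies the estimate

rng = np.random.default_rng(2)
cell = np.sin(2 * np.pi * 0.1 * np.arange(256)) + 0.2 * rng.normal(size=256)
P = windowed_doppler_spectrum(cell)
print("Doppler peak at bin", np.argmax(P), "of", P.size)
```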

[14] Acknowledgment. This work was supported by the 863 High Technology Project of China (2001AA631050).

References

Barrick, D. E., et al. (1994), Gated FMCW DF radar and signal processing for range/Doppler/angle determination, U.S. patent, U.S. Patent and Trademark Off., Washington, D. C.

Cristi, R. (2003), Modern Digital Signal Processing, China Machine Press, Beijing.

Ham, F. M., and I. Kostanic (2003), Principles of Neurocomputing for Science and Engineering, China Machine Press, Beijing.

Hopfield, J. J. (1982), Neural networks and physical systems with emergent collective computational abilities, Proc. Natl. Acad. Sci. U.S.A., 79, 2554-2558.

Hopfield, J. J., and D. W. Tank (1985), Neural computation of decisions in optimization problems, Biol. Cybern., 52, 141-152.

Kashyap, R. L. (1980), Inconsistency of the AIC rule for estimating the order of autoregressive models, IEEE Trans. Autom. Control, 25(5), 996-998.

Kay, S. M., and S. L. Marple (1981), Spectrum analysis: A modern perspective, Proc. IEEE, 69, 1380-1419.

Khan, R. H., and D. K. Mitchell (1991), Waveform analysis for high-frequency FMICW radar, IEE Proc. Part F Radar Signal Process., 138(5).

Minsky, M. L., and S. A. Papert (1988), Perceptrons, expanded ed., MIT Press, Cambridge, Mass.

Vizinho, A., and L. R. Wyatt (1996), Modern spectral analysis in HF radar remote sensing, in Oceans 96 MTS/IEEE: Coastal Ocean, Prospects for the 21st Century: Conference Proceedings, vol. 3, IEEE Press, Piscataway, N. J.

R. Gui and Z. Yang, School of Electronic Information, Wuhan University, Wuhan, China. (rzgui@126.com)
