Neural Networks Application to Reduction of Train Caused Distortions in Magnetotelluric Measurement Data


SCHEDAE INFORMATICAE VOLUME 17/18, 2009

MARZENA BIELECKA 1, TOMASZ DANEK 1, MAREK WOJDYŁA 2, GRZEGORZ BARAN 3

1 Department of Geoinformatics and Applied Computer Science, AGH University of Science and Technology, Mickiewicza 30, 30-059 Kraków, Poland
2 PBG Geophysical Exploration Company Ltd., Jagiellońska 76, 03-301 Warszawa, Poland
3 Institute of Computer Science, Jagiellonian University, Prof. Stanisława Łojasiewicza 6, 30-348 Kraków, Poland

e-mail: bielecka@agh.edu.pl, danek9@geolog.geol.agh.edu.pl

Abstract. Artificial intelligence methods for MT data processing are proposed. Distortions with a complex structure, created by external artificial sources such as a passing train, were investigated. In the first part of this paper, the time intervals containing this type of distortion are located by a dedicated neural system. Next, for each time interval found in the first stage, the distorted fragment of the measured curve is removed and replaced by a fragment generated by a trained perceptron. The experiments showed that the method used is effective.

Keywords: MT data, neural network.

1. Introduction

Magnetotellurics (MT) is one of the most popular and commonly used passive geophysical methods. It is widely used in deep lithosphere research, but recently a growing number of examples of using the MT method in hydrocarbon

prospecting and earthquake prediction have been observed. Magnetotellurics involves measuring fluctuations of the natural electric (E) and magnetic (H) fields in orthogonal directions at the surface of the Earth in order to determine the conductivity structure of the Earth at depths ranging from tens of meters to several hundreds of kilometers.

Fig. 1. Simplified scheme of MT processing and interpretation

The fundamental theory of magnetotellurics was first proposed by Tikhonov [2] and Cagniard [1]. Magnetotelluric data, called amplitude and phase sounding curves, are usually obtained by processing measured time series, which describe electromagnetic variations at a particular point on the Earth's surface. Sounding curves are then interpreted with various inversion methods (Fig. 1). The quality of magnetotelluric data, however, often suffers from a high noise level. The most common sources of this noise, such as railways, electrical devices and thunderstorms, are very hard to avoid, especially in urbanized areas. Typically, MT signals are recorded at two separate points (a measuring point and a remote reference point) and the two records are processed together; fragments with a high noise level are simply removed. Sometimes, however, noise sources like DC railways are so intensive that the whole record is affected and traditional processing is pointless. In this case, a more effective and sophisticated method of noise reduction is needed to obtain a solid sounding curve with limited error from raw and very noisy time series. Nowadays the most promising ways of reducing intensive noise of steady shape are based on artificial intelligence methods [3, 4, 5].

2. Recognition and elimination of distortions caused by trains

In order to implement an automatic system for recognizing and eliminating train caused distortions, 121 data sets were collected. Each set consists of two measurements:

(a) the first consists of two time series E_x(t) and H_y(t) without distortions;

(b) the second, recorded at the same place, consists of two time series E_x(t) and H_y(t) with distortions caused by trains.

The time series consist of 57 to about 5000 measurement points. These were the input data for the system.

2.1. Recognition of train caused distortions

Marking the distorted intervals in the time series E_x(t) was the first task of the implemented system.

The data analysis showed that distortions caused by trains have a characteristic shape in the time series E_x(t), see Fig. 2.

Fig. 2. Train caused distortions in the E_x time series

Fig. 3. An example of the effect of the approximating algorithm: the time series is approximated by a broken line
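The paper does not name the algorithm that produces the approximating broken line; it is only said to be controlled by a parameter δ. The following is a minimal sketch assuming a Ramer-Douglas-Peucker style simplification with tolerance delta; the function name and the choice of algorithm are illustrative assumptions, not taken from the paper.

import numpy as np

def broken_line(points: np.ndarray, delta: float) -> np.ndarray:
    """Keep only vertices that deviate from the current chord by more than delta.
    points is an (N, 2) array of (t, E_x) pairs; returns the broken-line vertices."""
    if len(points) < 3:
        return points
    start, end = points[0], points[-1]
    dx, dy = end - start
    norm = np.hypot(dx, dy) + 1e-12
    # Perpendicular distance of each interior point to the start-end chord.
    dists = np.abs(dx * (points[1:-1, 1] - start[1])
                   - dy * (points[1:-1, 0] - start[0])) / norm
    k = int(np.argmax(dists))
    if dists[k] <= delta:
        return np.vstack([start, end])            # one segment is close enough
    left = broken_line(points[: k + 2], delta)    # split at the farthest point
    right = broken_line(points[k + 1 :], delta)
    return np.vstack([left, right[1:]])           # drop the duplicated split vertex

# Example: simplify a synthetic E_x(t) record.
t = np.arange(500.0)
ex = np.sin(t / 40.0) + 0.05 * np.random.randn(t.size)
vertices = broken_line(np.column_stack([t, ex]), delta=0.1)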

2.1.1. Train caused distortions recognition using artificial neural networks (ANNs)

The learning and testing sets were prepared in the following way:

a) The approximating broken line is calculated for each time series E_x(t) (Fig. 3).

b) Sequences of n (n ≥ 1) consecutive broken-line segments, i.e. points (p_1, ..., p_{n+1}), satisfying a given criterion were chosen. Three criteria, denoted here ↑, ↓ and O, were used to select the sequences for the input data:

S_n^↑(d) = { (p_1, ..., p_{n+1}) : a(l: p_1 p_2) > d },

S_n^↓(d) = { (p_1, ..., p_{n+1}) : a(l: p_1 p_2) < -d },

S_n^O(d) = { (p_1, ..., p_{n+1}) : |a(l: p_1 p_2)| > d },

where a(l: p_m p_{m+1}) is the slope of the straight line containing the points p_m and p_{m+1}, and d ∈ R. The criteria were applied in order to divide the set S_n into various classes of problems, each solved by a specialized ANN. Thus, the criterion ↑ describes sequences in which the first line segment is given by an ascending affine function with a large value of the differential coefficient, whereas ↓ corresponds to sequences in which the first line segment is given by a descending affine function with a large absolute value of the differential coefficient. The criterion O describes sequences in which the first line segment is given by an ascending or descending affine function with a large absolute value of the differential coefficient. These criteria were used because a train caused distortion begins with a large increase or a large decrease of the measured value E_x.

c) For each sequence (p_1, ..., p_{n+1}) ∈ S_n an input vector was constructed in one of the two following ways (both encodings are sketched in code below):

W(p_1, ..., p_{n+1}) = [ x(p_1 p_2), y(p_1 p_2), ..., x(p_n p_{n+1}), y(p_n p_{n+1}) ],

W_w(p_1, ..., p_{n+1}) = [ a(l: p_1 p_2), d(p_1, p_2), ..., a(l: p_n p_{n+1}), d(p_n, p_{n+1}) ],

where x(p_m p_{m+1}) and y(p_m p_{m+1}) are the first and the second component of the vector p_m p_{m+1}, a(l: p_m p_{m+1}) is the slope of the straight line through the points p_m and p_{m+1}, and d(p_m, p_{m+1}) is the Euclidean distance between the points p_m and p_{m+1}.

d) A numerical value was assigned to each input vector, depending on whether the investigated fragment of the time series E_x(t) was distorted or not: f(W) = 1 if the vector W was obtained from a distorted time series E_x(t), and f(W) = 0 if it was obtained from an undistorted one. About 3/4 of the elements of the obtained set of pairs (W, f(W)) were selected randomly and constituted the learning set; the remaining elements were used as the testing set.
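As an illustration, both encodings and the three selection criteria can be computed as follows; the function names and the small constant guarding against vertical segments are illustrative assumptions.

import numpy as np

def input_vectors(vertices: np.ndarray):
    """Encode n consecutive broken-line segments in the two ways defined above:
    W uses the (x, y) components of each segment vector p_m p_{m+1},
    W_w uses the slope a(l: p_m p_{m+1}) and the length d(p_m, p_{m+1})."""
    seg = np.diff(vertices, axis=0)              # vectors p_{m+1} - p_m
    W = seg.reshape(-1)                          # [x_1, y_1, ..., x_n, y_n]
    slopes = seg[:, 1] / (seg[:, 0] + 1e-12)     # a(l: p_m p_{m+1})
    lengths = np.hypot(seg[:, 0], seg[:, 1])     # d(p_m, p_{m+1})
    W_w = np.column_stack([slopes, lengths]).reshape(-1)
    return W, W_w

def satisfies(vertices: np.ndarray, d: float, kind: str) -> bool:
    """Check the first segment's slope against a criterion: ascending ('up'),
    descending ('down'), or either direction ('O')."""
    a0 = (vertices[1, 1] - vertices[0, 1]) / (vertices[1, 0] - vertices[0, 0] + 1e-12)
    return {"up": a0 > d, "down": a0 < -d, "O": abs(a0) > d}[kind]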

2.1.2. ANN architectures

Two types of ANNs were used:

a) A multilayer perceptron with one hidden layer consisting of sigmoid neurons [7]:

f(x) = 1 / (1 + e^(-x)).

The output layer consists of linear neurons.

b) A multilayer network with radial basis neurons in the hidden layer [6]:

g(x_1, ..., x_k) = f(r(x_1, ..., x_k)),   r(x) = β ||x - w||²,

where x = [x_1, ..., x_k] is the input vector, w = [w_1, ..., w_k] is the neuron weight vector, ||x - w|| is the distance in the k-dimensional Euclidean space, and β > 0 is a parameter. The activation function is f(x) = e^(-x). The output layer consists of linear neurons.

The efficiency of ANN prediction was measured using the Mean Absolute Error (MAE).
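A minimal sketch of the two forward passes and of the error measure; the weight shapes are illustrative, and training (e.g. by back-propagation) is omitted.

import numpy as np

def mlp_forward(x, W1, b1, W2, b2):
    """Multilayer perceptron: sigmoid hidden layer f(x) = 1/(1 + e^-x), linear output."""
    h = 1.0 / (1.0 + np.exp(-(W1 @ x + b1)))
    return W2 @ h + b2

def rbf_forward(x, centers, beta, W2, b2):
    """Radial basis hidden layer: g(x) = exp(-beta * ||x - w||^2), linear output."""
    g = np.exp(-beta * np.sum((centers - x) ** 2, axis=1))
    return W2 @ g + b2

def mae(y_true, y_pred):
    """Mean Absolute Error, the measure of prediction efficiency used here."""
    return float(np.mean(np.abs(y_true - y_pred)))

# Example dimensions: input vector of size 20, 6 hidden neurons, scalar output.
rng = np.random.default_rng(0)
x = rng.normal(size=20)
out = mlp_forward(x, rng.normal(size=(6, 20)), rng.normal(size=6),
                  rng.normal(size=(1, 6)), rng.normal(size=1))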

2.1.3. The system recognizing train caused distortions

For a given time series E_x(t), the train caused distortions recognition algorithm has the following form:

1. Approximate the time series E_x(t) by a broken line, with the parameter δ given by the user or calculated using statistical methods.

2. For each sequence of n broken-line segments:

a. check whether the sequence satisfies one of the established criteria (↑, ↓ or O);

b. choose the proper neural network depending on which criterion is satisfied; if none is satisfied, the sequence is skipped;

c. construct the input vector according to W or W_w;

d. activate the ANN for the obtained input vector;

e. store the ANN output value for each point t of the investigated interval.

3. For each point t of the graph, the output value of the system is equal to the arithmetic mean of all ANN output values obtained for it in step 2.e.

4. Mark the intervals of the time series E_x(t) on which the output value of the system is greater than a given bias 0 ≤ γ ≤ 1 (the default value is 0.5) as intervals with distortions.

The scheme of the system is shown in Fig. 4.

Fig. 4. The scheme of the train caused distortions recognizing system using the criteria ↑ and ↓
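A minimal sketch of the above loop, reusing satisfies and input_vectors from the earlier sketches; the nets mapping and the use of the first vertex coordinates as time stamps are illustrative assumptions, not taken from the paper.

import numpy as np
from collections import defaultdict

def mark_distortions(vertices, nets, n, d=0.4, gamma=0.5):
    """nets maps a criterion name ('up', 'down', 'O') to a pair (ann, encode),
    where encode builds the W or W_w input vector for that network.
    Returns {point t: True if t is marked as belonging to a distortion}."""
    votes = defaultdict(list)                         # t -> ANN output values
    for s in range(len(vertices) - n):
        window = vertices[s : s + n + 1]              # n consecutive segments
        for kind, (ann, encode) in nets.items():
            if satisfies(window, d, kind):            # step 2.a
                out = float(ann(encode(window)))      # steps 2.c-2.d
                t0, t1 = int(window[0, 0]), int(window[-1, 0])
                for t in range(t0, t1 + 1):
                    votes[t].append(out)              # step 2.e
    # Steps 3-4: average all votes per point and compare with the bias gamma.
    return {t: float(np.mean(v)) > gamma for t, v in votes.items()}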

2.1.4. Obtained results

The results are shown in Tab. 1. In the systems number 3, 4, 8, 9, 11 and 18, two different ANNs were used, for the line segment sequences satisfying the criteria ↑0.4 and ↓0.4 respectively. System 18 was the best one; its efficiency is equal to 75.13%.

2.2. Train caused distortions elimination

The data set consists of 121 subsets, each of which is a double measurement of the Earth's electric field, with and without train caused distortions. The distorted signal was put onto the ANN input; the task of the ANN was to return a time series similar to the corresponding one without distortions. The ANN has one hidden layer with sigmoid or radial basis neurons and linear neurons in the output layer. The number of neurons in the output layer was equal to the size of the input vector.

Tab. 1. Distortions recognition results. Notations: RBF = neural network with radial basis neurons in the hidden layer; WP = multilayer perceptron; δ = parameter determined for each time series using statistical methods. In the systems with two ANNs, the values for the ↑0.4 and the ↓0.4 network are separated by a slash.

System | δ | Input size | Vector | Criterion | Hidden neurons | ANN type | MAE (testing) | Efficiency
1 | 0.8 | 20 | | | 120 | RBF | 0.3833 | 72.15%
2 | 0.8 | 20 | | | 6 | WP | 0.3961 | 71.81%
3 | 0.8 | 30 | W | ↑0.4 / ↓0.4 | 30 / 31 | WP / RBF | 0.3621 / 0.3773 | 74.43%
4 | 0.8 | 30 | W | ↑0.4 / ↓0.4 | 5 / 8 | WP / WP | 0.3649 / 0.3941 | 73.51%
5 | 1.3 | 10 | W_w | | 15 | WP | 0.368 | 61.49%
6 | 1.3 | 20 | W_w | O | 63 | RBF | 0.3689 | 68.01%
7 | 1.3 | 20 | W_w | | 6 | WP | 0.3912 | 63.36%
8 | 1.3 | 20 | W | ↑0.4 / ↓0.4 | 22 / 5 | WP / WP | 0.3769 / 0.3877 | 68.03%
9 | 1.3 | 20 | W | ↑0.4 / ↓0.4 | 3 / 5 | WP / WP | 0.39 / 0.3877 | 67.52%
10 | 1.3 | 30 | | O | 5 | WP | 0.3727 | 65.35%
11 | 1.3 | 30 | W | ↑0.4 / ↓0.4 | 2 / 4 | WP / WP | 0.3487 / 0.3669 | 66.17%
12 | 1.3 | 30 | W_w | 0.4 | 23 | WP | 0.3545 | 67.22%
13 | 1.3 | 40 | W_w | O | 18 | RBF | 0.359 | 61.59%
14 | 1.3 | 40 | W_w | | 43 | WP | 0.3396 | 66.17%
15 | 1.3 | 40 | W_w | | 102 | RBF | 0.302 | 62.03%
16 | variable | 10 | | | 6 | WP | 0.4119 | 64.79%
17 | variable | 30 | W | O | 46 | RBF | 0.401 | 70.46%
18 | variable | 30 | W_w | ↑0.4 / ↓0.4 | 50 / 8 | WP / WP | 0.378 / 0.3831 | 75.13%
19 | variable | 40 | W_w | 0.4 | 51 | RBF | 0.3901 | 71.55%
20 | variable | 40 | W | 0.4 | 24 | RBF | 0.3922 | 68.82%
21 | variable | 40 | W | | 6 | RBF | 0.3992 | 68.40%

The learning and testing sets were constructed in the following way:

1. Each time series E_x(t) was divided into fragments consisting of M points.

2. Each obtained fragment F(t) was divided into m sub-fragments, and the integral (here, the sum) of each sub-fragment was calculated:

G(t) = Σ_{i=0}^{m-1} F(mt + i),   t = 0, 1, 2, ..., M/m - 1.

3. The data set was composed of the input vectors [G(0), ..., G(M/m - 1)] obtained from the distorted signal and the corresponding output vectors [G'(0), ..., G'(M/m - 1)] obtained from the signal without distortions.

4. About 3/4 of the data set was used as the learning set and 1/4 as the testing set.

The data used amount to about 15% of the data necessary for training the ANNs to the highest efficiency. The results are shown in Tab. 2 and Tab. 3. Tab. 2 contains the results obtained for the smaller data set, consisting of 74 measurement subsets, and Tab. 3 the results for the bigger data set, consisting of 121 measurement subsets. It can be noticed that the results change significantly for the bigger data set.
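A minimal sketch of this aggregation step (names illustrative); note that M = 600 and m = 12 reduce a fragment to a 50-component vector, matching the first rows of Tab. 2 and Tab. 3.

import numpy as np

def aggregate(fragment: np.ndarray, m: int) -> np.ndarray:
    """G(t) = sum_{i=0}^{m-1} F(m*t + i) for t = 0, ..., M/m - 1:
    sum each of the M/m consecutive m-point sub-fragments."""
    M = fragment.size - fragment.size % m        # ignore a trailing partial block
    return fragment[:M].reshape(-1, m).sum(axis=1)

# Example: a 600-point fragment with m = 12 gives a 50-component input vector.
F = np.random.randn(600)
G = aggregate(F, m=12)
assert G.size == 50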

Tab. 2. Neural networks training results. The networks were used for the elimination of train caused distortions. Notations: RBF = neural network with radial basis neurons in the hidden layer; WP = multilayer perceptron

Index | M | m | Output neurons | ANN type | Hidden neurons | MAE (learning) | MAE (testing) | Learning set size | Testing set size | Number of weights
1 | 600 | 12 | 50 | RBF | 12 | 11.409 | 19.899 | 49 | 16 | 1200
2 | 300 | 12 | 25 | RBF | 10 | 19.89 | 23.565 | 142 | 47 | 500
3 | 300 | 6 | 50 | RBF | 11 | 10.108247 | 13.112735 | 142 | 47 | 1100
4 | 150 | 6 | 25 | WP | 20 | 152.93 | 179.178 | 324 | 108 | 1000
5 | 150 | 6 | 25 | RBF | 10 | 11.003 | 13.101 | 324 | 108 | 500
6 | 150 | 6 | 25 | RBF | 21 | 11.061 | 12.044 | 324 | 108 | 1050

Tab. 3. Results for the ANNs used for train caused distortions elimination. Notations: RBF = neural network with radial basis neurons in the hidden layer; WP = multilayer perceptron

Index | M | m | Output neurons | ANN type | Hidden neurons | MAE (learning) | MAE (testing) | Learning set size | Testing set size | Number of weights
1 | 600 | 12 | 50 | RBF | 6 | 16.367 | 24.855 | 38 | 12 | 600
2 | 300 | 12 | 25 | RBF | 10 | 22.959 | 28.115 | 105 | 35 | 500
3 | 300 | 6 | 50 | RBF | 16 | 10.020 | 15.669 | 105 | 35 | 1600
4 | 150 | 6 | 25 | RBF | 27 | 11.070 | 15.294 | 239 | 80 | 1350

3. Concluding remarks

In the case of train caused distortions recognition, the results should be regarded as preliminary, because the number of learning examples is only about 15% of the needed number. However, even in such a situation the system achieved 75% efficiency, which, in this context, is quite a good result. In the case of the ANNs which remove train caused distortions, the results are very promising. Enlarging the learning set by about 50 examples improved the outcomes significantly, which probably means that the obtained results can be improved further by adding more examples to the learning set.

Acknowledgements. This article was written within the framework of a research grant funded by the Polish Ministry of Science and Higher Education, no. N 525 029 32/2768.

4. References

[1] Cagniard L.; Basic theory of the magnetotelluric method of geophysical prospecting, Geophysics, 18, 1953, pp. 605-637.

[2] Tikhonov A.N.; The determination of the electrical properties of deep layers of the Earth's crust, Dokl. Akad. Nauk SSSR, 73, 1950, pp. 295-297.

[3] Manoj C., Nagarajan N.; The application of artificial neural networks to magnetotelluric time-series analysis, Geophysical Journal International, 153, 2003, pp. 409-423.

[4] Moorkamp M., Jones A.G., Rao C.K.; Processing magnetotelluric time series with adaptive filters and neural networks, 18th EM Induction Workshop, El Vendrell, Spain, 2006.

[5] Popova I.V., Ogawa Y.; Application of a modified Hopfield neural network to noisy magnetotelluric data, Izvestiya Physics of the Solid Earth, 43(3), 2007, pp. 217-224.

[6] Yee P.V., Haykin S.; Regularized Radial Basis Function Networks: Theory and Applications, 2001.

[7] Tadeusiewicz R.; Sieci neuronowe, Akademicka Oficyna Wydawnicza, 1993.

Received December 29, 2008