Hand Written Digit Recognition Using Backpropagation Neural Network on Master-Slave Architecture
J.V.S.Srinivas et al. / International Journal of Computer Science & Engineering Technology (IJCSET)

J.V.S. SRINIVAS, DVR College of Engineering & Technology, Hyderabad, India
N. SHIRISHA, DRK Institute of Science & Technology, Hyderabad, India

Abstract: The objective of this work is to identify handwritten digits represented in a rectangular box of 16x16 pixels on a gray scale of 256 values. The backpropagation neural network (BPN) is one of the simplest models for supervised training of multilayer neural networks. In this paper we design a BPN and train it with a set of handwritten data. We also implement the BPN on a Master-Slave architecture to minimize the learning time. Performance parameters are evaluated for both the sequential implementation and the parallel implementation on the Master-Slave architecture.

Keywords: Backpropagation, Master-Slave architecture, learning rate, Semeion Handwritten Digit Data Set.

I. INTRODUCTION

In recent years handwritten number recognition has been one of the challenging research problems. Many approaches, such as minimum distance, decision trees and statistical methods, have been developed to deal with handwritten number recognition. Alceu de Britto et al. [11] proposed an approach for recognizing handwritten numeral strings that relies on a two-stage HMM-based method. The main objective of our system is to recognize isolated Arabic numerals as they appear in different applications. The studies were conducted on the Semeion Handwritten Digit Data Set taken from the UCI machine learning repository. The dataset contains 1593 handwritten digits from around 80 persons that were scanned and stretched into a rectangular box of 16x16 pixels on a gray scale of 256 values. Each pixel of each image was then scaled into a Boolean (1/0) value using a fixed threshold. Each person wrote all the digits from 0 to 9 twice on paper.
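The fixed-threshold binarization described above can be sketched as follows; the threshold value 127 and the toy image are assumptions for illustration, not values taken from the dataset documentation:

```python
import numpy as np

def binarize(image, threshold=127):
    """Scale each grayscale pixel (0-255) to a Boolean 1/0 value
    using a fixed threshold, as done for the Semeion data."""
    image = np.asarray(image)
    return (image > threshold).astype(np.uint8)

# A toy 16x16 "digit": a bright vertical stroke on a dark background.
digit = np.zeros((16, 16), dtype=np.uint8)
digit[2:14, 7:9] = 200          # stroke pixels above the threshold

binary = binarize(digit)        # 16x16 array of 0/1 values
```

The resulting 256-element Boolean vector per digit is what the network consumes as its input pattern.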
In general, parallel processing can be achieved in two different ways: data-set parallelism and algorithmic parallelism. Algorithmic parallelism offers two possible schemes for the MLP: (i) mapping each layer of the network to a processor, and (ii) mapping a block of neurons to a processor. A special case of the second scheme is to map each neuron of the network to a processor, so that the parallel computer becomes a physical model of the network. The proposed parallel implementation avoids recomputation of weights and requires fewer communication cycles per pattern. The communication of data among the processors in the computing network is also low. We obtain performance parameters such as speed-up, optimal number of processors and processing time for both the sequential implementation and the parallel implementation on the Master-Slave architecture. The analytical and experimental results show that the proposed parallel implementation performs better than the sequential implementation.

II. MATHEMATICAL MODEL

In this paper we consider a fully connected MLP network trained using the BP algorithm with momentum. We consider an MLP with three layers (l = 0, 1, 2) having N_l neurons in layer l. The different phases of the learning algorithm and the corresponding processing times are discussed in the following sections.

A. Sequential Back Propagation Algorithm

The BP algorithm is a supervised learning algorithm used to find suitable weights such that, for a given input pattern (X), the network output (Y_i) matches the desired output (d_i). The algorithm is divided into three phases, namely the forward phase, the error back-propagation phase and the weight update phase. The details of each phase and the time taken to process it are discussed below.

Forward phase: For a given input pattern X, the activation values of the hidden and output layers are computed as follows:

ISSN : Vol. 3 No. Nov 0 549
Y^l_i = f( sum_{j=1..N_{l-1}} w^l_ji Y^{l-1}_j ),   i = 1, 2, ..., N_l;  l = 1, 2;  Y^0 = X

where w^l_ji is the weight of the synapse from the j-th neuron in layer (l-1) to the i-th neuron in layer l, Y^l_i is the output of the i-th neuron of layer l, and f(.) is the activation function (bipolar sigmoid). Let t_m, t_a and t_ac be the times taken for one floating-point multiplication, one addition and one evaluation of the activation function, respectively. The time taken to complete the forward phase (T_1) is

T_1 = N_1 (N_0 + N_2) M + (N_1 + N_2) t_ac,   where M = t_a + t_m

Error back-propagation phase: In this phase the deviation between the network output and the desired value is back-propagated to all neurons through the output weights. The delta term for the i-th output neuron is

δ^2_i = (Y^2_i - d_i) f'(.),   i = 1, 2, ..., N_2

where f'(.) is the first derivative of the activation function. The delta term for the i-th hidden neuron is

δ^1_i = f'(.) sum_{j=1..N_2} w^2_ij δ^2_j,   i = 1, 2, ..., N_1

The time taken to complete the error back-propagation phase is

T_2 = (1 + N_2) N_1 M + (N_1 + N_2) t_ac

Weight update phase: In this phase the network weights are updated; the update of any w^l_ji depends on δ^l_i and Y^{l-1}_j:

Δw^l_ji(t) = η δ^l_i Y^{l-1}_j + μ Δw^l_ji(t-1),   l = 1, 2

where η is the learning rate and μ is the momentum factor. The time taken to update the weight matrices between the three layers is

T_3 = N_1 (N_0 + N_2) M + N_1 (N_0 + N_2) t_m

Let t_ac = β M. The total processing time (T_seq) for training a single pattern in one iteration is the sum of the times of the three phases:

T_seq = T_1 + T_2 + T_3 = (N_1 k + N_2 γ) M + N_1 (N_0 + N_2) t_m,   where k = 2N_0 + 3N_2 + 1 + 2β and γ = 2β

B.
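The three phases above can be sketched in NumPy. This is a minimal illustration, not the authors' code: the layer sizes are implied by the weight-matrix shapes, and the weight change uses the usual gradient-descent sign convention (a negative step along δ·y, since δ is defined here as (Y - d)·f').

```python
import numpy as np

def f(x):                       # bipolar sigmoid activation
    return 2.0 / (1.0 + np.exp(-x)) - 1.0

def f_prime(y):                 # derivative expressed via the output y = f(x)
    return 0.5 * (1.0 + y) * (1.0 - y)

def train_pattern(x, d, w1, w2, dw1, dw2, eta=0.1, mu=0.5):
    """One BP iteration (forward, error back-propagation, weight update
    with momentum) for a 3-layer MLP. w1: (N0, N1), w2: (N1, N2)."""
    # Forward phase: hidden and output activations.
    y1 = f(x @ w1)
    y2 = f(y1 @ w2)
    # Error back-propagation phase: output then hidden deltas.
    delta2 = (y2 - d) * f_prime(y2)
    delta1 = f_prime(y1) * (delta2 @ w2.T)
    # Weight update phase with momentum: dW(t) = -eta*delta*y + mu*dW(t-1).
    dw2[:] = -eta * np.outer(y1, delta2) + mu * dw2
    dw1[:] = -eta * np.outer(x, delta1) + mu * dw1
    w2 += dw2
    w1 += dw1
    return y2
```

Repeated calls on the same pattern drive the output toward the target, which is the behaviour the timing analysis above is counting operations for.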
Parallel Implementation: In this implementation the hidden layer is partitioned using vertical parallelism, and the weight connections are partitioned on the basis of synaptic-level parallelism. The Master-Slave architecture of the MLP network used in the proposed scheme is shown in Fig. 1. The architecture consists of one master and p slave processors. The output layer is placed on the front-end (master) processor, the hidden layer is partitioned into blocks of N_1/p neurons, and the partitions are placed on the slave processors. The input neurons are replicated on all slave processors. In this architecture communication takes place only between the master and the slave processors. Each slave processor, together with the master, executes the three phases of the BP training algorithm. The parallel execution of the three phases and the corresponding processing times are calculated below.
Fig. 1: Master-Slave architecture of the MLP network.

Forward phase: In this phase the activation values of the neurons are calculated. The activation values of the hidden neurons are calculated at the slave processors, while the activation values of the output neurons are calculated at the master. The activation values and the weight connections of the neurons held by the slave processors are sent to the master, which then calculates the activation values of the output neurons. The approximate time taken to process the forward phase (T^P_1) is

T^P_1 = (N_0 + N_2) N_1 M / p + (N_1/p + N_2) t_ac + N_2 t_a + t_com,   where M = t_a + t_m

Error propagation phase: In this phase each processor calculates its error terms. The approximate time for the error back-propagation phase includes the error calculation at the front-end processor and at the slave processors:

T^P_2 = (N_2 + 1) N_1 M / p + (N_1/p + N_2) t_ac + t_com

Weight update phase: In this phase the weight connections between the output layer and the hidden layer are updated (the weight connections between the hidden and input layers are updated as well). To update the weight connection between the i-th hidden neuron and the j-th input neuron, we need the activation value of the j-th input neuron and the δ term of the i-th hidden neuron. Since both the activation value and the error term are present in the same processor, no communication is required. The processing time to update the weight connections is

T^P_3 = (N_0 + N_2) N_1 M / p + (N_0 + N_2) (N_1/p) t_m

Let T_P be the time taken to train a single pattern in the proposed scheme; it is the algebraic sum of the times of the three phases:

T_P = M [2(N_0 + N_2) N_1 + (N_2 + 1) N_1] / p + 2(N_1/p + N_2) t_ac + N_2 t_a + (N_0 + N_2) N_1 t_m / p + M log p {σ + μ[(N_1/p) + N_2]}

Here t_com = log p [T_ini + T_c D], where p is the number of processors, D is the data size (number of words), T_ini is the startup time and T_c is the per-word transfer time [12]; σ = T_ini/M and μ = T_c/M express the communication constants in units of M (this μ is distinct from the momentum factor).

C.
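The vertical partitioning of the hidden layer can be simulated in plain NumPy. A real master-slave system would use message passing between processors; the loop below stands in for the p slaves and shows that accumulating their partial output sums reproduces the sequential forward pass (the tanh activation here is an illustrative stand-in for the bipolar sigmoid):

```python
import numpy as np

def forward_sequential(x, w1, w2, f=np.tanh):
    """Sequential forward pass of the 3-layer MLP."""
    return f(f(x @ w1) @ w2)

def forward_master_slave(x, w1, w2, p, f=np.tanh):
    """Simulate the forward phase on p slaves: each slave holds a block of
    N1/p hidden neurons (columns of w1, rows of w2) plus a full copy of the
    input, computes its hidden activations and its partial contribution to
    the output pre-activations, and sends them to the master."""
    n1 = w1.shape[1]
    blocks = np.array_split(np.arange(n1), p)
    partial_sums = []
    for block in blocks:                     # each iteration = one slave
        y1_block = f(x @ w1[:, block])       # hidden activations on the slave
        partial_sums.append(y1_block @ w2[block, :])
    # Master accumulates the partial sums and applies the output activation.
    return f(np.sum(partial_sums, axis=0))
```

Because the output pre-activation is a sum over hidden neurons, splitting that sum across slaves and adding the partial results at the master gives exactly the sequential answer.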
Analytical Performance Comparison:

Maximum number of processors: In general the number of processors in the parallel implementation is at most the number of hidden neurons. Increasing the number of processors does not improve performance beyond a certain point.

Speed-up analysis: The speed-up of a p-processor system is the ratio of the time taken by a uniprocessor to the time taken by the parallel algorithm on the p-processor network. The speed-up ratio for the parallel implementation can be formulated as

S = T_seq / T_P = [(N_1 k + N_2 γ) M + N_1 (N_0 + N_2) t_m] / {M [2(N_0 + N_2) N_1 + (N_2 + 1) N_1] / p + 2(N_1/p + N_2) t_ac + N_2 t_a + (N_0 + N_2) N_1 t_m / p + M log p {σ + μ[(N_1/p) + N_2]}}
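The speed-up ratio above is just a quotient of two cost models, so it can be evaluated directly. The sketch below follows the reconstructed T_seq and T_P expressions; all constant values (M, t_a, t_m, β, σ, μ) are illustrative placeholders, not measured values from the paper:

```python
import math

def t_seq(n0, n1, n2, M, t_m, beta):
    """Sequential time per pattern: (N1*k + N2*g)*M + N1*(N0+N2)*t_m."""
    k = 2 * n0 + 3 * n2 + 1 + 2 * beta
    g = 2 * beta
    return (n1 * k + n2 * g) * M + n1 * (n0 + n2) * t_m

def t_par(n0, n1, n2, p, M, t_a, t_m, beta, sigma, mu):
    """Parallel (master-slave) time per pattern for p slaves."""
    t_ac = beta * M
    compute = M * (2 * (n0 + n2) * n1 + (n2 + 1) * n1) / p
    activ = 2 * (n1 / p + n2) * t_ac + n2 * t_a
    update = (n0 + n2) * n1 * t_m / p
    comm = M * math.log2(p) * (sigma + mu * (n1 / p + n2))
    return compute + activ + update + comm

def speedup(n0, n1, n2, p, M=2.0, t_a=1.0, t_m=1.0,
            beta=4.0, sigma=50.0, mu=0.5):
    return t_seq(n0, n1, n2, M, t_m, beta) / t_par(
        n0, n1, n2, p, M, t_a, t_m, beta, sigma, mu)
```

For a 16x16-input network (N_0 = 256) with a few dozen hidden neurons, this model gives a speed-up greater than 1 for small p, consistent with the analysis.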
If the network size is much larger than the number of processors, the speed-up ratio approaches p. It falls short of p because of the extra computation required in the weight update phase and the extra communication needed to exchange the hidden-neuron activation values.

Optimal number of processors: From the expression for T_P it is clear that as the number of processors increases, the communication time increases while the computation time decreases. The total processing time therefore first decreases and then increases beyond a certain number of processors, so there exists an optimal number of processors p* for which the processing time is minimum. A closed-form expression for the optimal number of processors is obtained by partially differentiating the training-time expression T_P with respect to p and equating the derivative to zero.

Difference between processing times: From the expressions for T_seq and T_P the difference in processing time is

T_seq - T_P = [(N_1 k + N_2 γ) M + N_1 (N_0 + N_2) t_m] - {M [2(N_0 + N_2) N_1 + (N_2 + 1) N_1] / p + 2(N_1/p + N_2) t_ac + N_2 t_a + (N_0 + N_2) N_1 t_m / p + M log p {σ + μ[(N_1/p) + N_2]}}

Assume that N_0 = N_1 = N_2 = N. Substituting above, the difference reduces to a polynomial in N, quadratic in both its t_a and t_m components, with positive leading coefficients; hence T_seq - T_P > 0 for sufficiently large N. It follows that the time taken by the parallel implementation is less than the time taken by the sequential algorithm.

III. EXPERIMENTAL RESULTS

In this section we present the performance of the proposed parallel algorithm on the Master-Slave architecture and compare the results with the sequential approach. Both algorithms were implemented on Sparc-II-based Sun workstations connected through Ethernet. There was a substantial decrease in the learning time of the network in the parallel implementation. The results are shown graphically in Fig. 2.

Fig. 2: Time comparison (time in seconds) of the basic sequential and parallel algorithms. Momentum factor: 0.5.

IV.
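Rather than differentiating T_P symbolically, the optimal processor count can also be located numerically from the same cost model. The sketch below is self-contained and its constants are hypothetical placeholders, not the paper's measured hardware parameters:

```python
import math

def t_par(n0, n1, n2, p, M=2.0, t_a=1.0, t_m=1.0,
          beta=4.0, sigma=50.0, mu=0.5):
    """Illustrative per-pattern parallel training time for p slaves."""
    t_ac = beta * M
    return (M * (2 * (n0 + n2) * n1 + (n2 + 1) * n1) / p
            + 2 * (n1 / p + n2) * t_ac + n2 * t_a
            + (n0 + n2) * n1 * t_m / p
            + M * math.log2(p) * (sigma + mu * (n1 / p + n2)))

def optimal_p(n0, n1, n2, p_max):
    """Scan p = 1..p_max (at most one slave per hidden neuron) for the
    minimum training time, instead of solving dT_P/dp = 0 symbolically."""
    return min(range(1, min(p_max, n1) + 1),
               key=lambda p: t_par(n0, n1, n2, p))
```

With communication costs that grow in log p while computation shrinks in 1/p, the scan exhibits exactly the decrease-then-increase behaviour described above whenever the startup cost σ is large enough relative to the per-slave workload.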
CONCLUSION

In this paper an implementation of a distributed BP algorithm for training an MLP network with a single hidden layer on a cluster of computers connected over an Ethernet LAN is presented. A hybrid partitioning technique is proposed to partition the network, and the partitioned network is mapped onto a Master-Slave architecture. The analytical performance of the proposed algorithm is compared with vertical partitioning. Using the hybrid-partitioning scheme, recomputation of weights is avoided and the communication time is reduced. As one
hidden layer is adequate for a large number of applications, in the present work the algorithm is developed for neural networks with one hidden layer.

ACKNOWLEDGEMENTS

We acknowledge the UCI machine learning repository for the opportunity to use the Semeion Handwritten Digit Data Set to train and test our model.

REFERENCES
[1] Christopher M. Bishop, Neural Networks for Pattern Recognition, 1995.
[2] Jim Torresen and Shinji Tomita, "A Review of Parallel Implementations of Backpropagation Neural Networks," citeseer.ist.psu.edu.
[3] R. Greenlaw, H. Hoover, and W. Ruzzo, Limits to Parallel Computation, Oxford University Press, 1995.
[4] V. Kumar, S. Shekhar, and M.B. Amin, "A Scalable Parallel Formulation of the Back-Propagation Algorithm for Hypercubes and Related Architectures," IEEE Trans. Parallel and Distributed Systems, vol. 5, no. 10, Oct. 1994.
[5] T. Czauderna and U. Seiffert, "Implementation of MLP networks running backpropagation on various parallel computer hardware using MPI," Proceedings of the 5th International Conference on Recent Advances in Soft Computing (RASC), Nottingham, United Kingdom, December 16-18, 2004.
[6] Han-wook Lee and Chan-ik, "An Efficient Parallel Block Backpropagation Learning Algorithm in Transputer-Based Mesh-Connected Parallel Computers," IEICE Trans. Inf. & Syst., vol. E83-D, no. 8, August 2000.
[7] Faramarz Valafar and Okan K. Ersoy, "Parallel Implementation of Backpropagation Neural Network on MASPAR MP-1."
[8] Laurene Fausett, Fundamentals of Neural Networks: Architectures, Algorithms and Applications, PEA, 2005.
[9] S. Haykin, Neural Networks: A Comprehensive Foundation, PEA, 2004.
[10] Jacek M. Zurada, Introduction to Artificial Neural Systems, 2004.
[11] Alceu de Britto, S., R. Sabourin, F. Bortolozzi and Y. Ching Suen, "The recognition of handwritten numeral strings using a two-stage HMM-based method," Int. J. Document Analysis and Recognition, vol. 5, 2003.
DOI: 10.1007/s
[12] Ananth Grama, Anshul Gupta, George Karypis, and Vipin Kumar, Introduction to Parallel Computing, PEA, 2004.
[13] W. Richard Stevens, Unix Network Programming, Prentice Hall.
[14] N. Sundararajan and P. Saratchandran, Parallel Architectures for Artificial Neural Networks, IEEE CS Press, 1998.
[15] V. Sudhakar and C. Siva Ram Murthy, "Efficient Mapping of Backpropagation Algorithm onto a Network of Workstations," IEEE Trans. Systems, Man, and Cybernetics, Part B: Cybernetics, vol. 28, no. 6, 1998.
More informationList Scheduling and LPT Oliver Braun (09/05/2017)
List Scheduling and LPT Oliver Braun (09/05/207) We investigate the classical scheduling proble P ax where a set of n independent jobs has to be processed on 2 parallel and identical processors (achines)
More informationLocal Maximum Ozone Concentration Prediction Using LSTM Recurrent Neural Networks
Local Maxiu Ozone Concentration Prediction Using LSTM Recurrent Neural Networks Sabrine Ribeiro René Alquézar Dept. Llenguatges i Sistees Inforàtics Universitat Politècnica de Catalunya Capus Nord, Mòdul
More informationOptimal Control of Nonlinear Systems Using the Shifted Legendre Polynomials
Optial Control of Nonlinear Systes Using Shifted Legendre Polynoi Rahan Hajohaadi 1, Mohaad Ali Vali 2, Mahoud Saavat 3 1- Departent of Electrical Engineering, Shahid Bahonar University of Keran, Keran,
More informationŞtefan ŞTEFĂNESCU * is the minimum global value for the function h (x)
7Applying Nelder Mead s Optiization Algorith APPLYING NELDER MEAD S OPTIMIZATION ALGORITHM FOR MULTIPLE GLOBAL MINIMA Abstract Ştefan ŞTEFĂNESCU * The iterative deterinistic optiization ethod could not
More informationExperimental Design For Model Discrimination And Precise Parameter Estimation In WDS Analysis
City University of New York (CUNY) CUNY Acadeic Works International Conference on Hydroinforatics 8-1-2014 Experiental Design For Model Discriination And Precise Paraeter Estiation In WDS Analysis Giovanna
More informationEfficient Filter Banks And Interpolators
Efficient Filter Banks And Interpolators A. G. DEMPSTER AND N. P. MURPHY Departent of Electronic Systes University of Westinster 115 New Cavendish St, London W1M 8JS United Kingdo Abstract: - Graphical
More informationBoundary and Region based Moments Analysis for Image Pattern Recognition
ISSN 1746-7659, England, UK Journal of Inforation and Coputing Science Vol. 10, No. 1, 015, pp. 040-045 Boundary and Region based Moents Analysis for Iage Pattern Recognition Dr. U Ravi Babu 1, Dr Md Mastan
More informationSupervised Baysian SAR image Classification Using The Full Polarimetric Data
Supervised Baysian SAR iage Classification Using The Full Polarietric Data (1) () Ziad BELHADJ (1) SUPCOM, Route de Raoued 3.5 083 El Ghazala - TUNSA () ENT, BP. 37, 100 Tunis Belvedere, TUNSA Abstract
More informationSupport Vector Machines MIT Course Notes Cynthia Rudin
Support Vector Machines MIT 5.097 Course Notes Cynthia Rudin Credit: Ng, Hastie, Tibshirani, Friedan Thanks: Şeyda Ertekin Let s start with soe intuition about argins. The argin of an exaple x i = distance
More informationNeural networks. Chapter 20. Chapter 20 1
Neural networks Chapter 20 Chapter 20 1 Outline Brains Neural networks Perceptrons Multilayer networks Applications of neural networks Chapter 20 2 Brains 10 11 neurons of > 20 types, 10 14 synapses, 1ms
More informationOPTIMIZATION OF SPECIFIC FACTORS TO PRODUCE SPECIAL ALLOYS
5 th International Conference Coputational Mechanics and Virtual Engineering COMEC 2013 24-25 October 2013, Braşov, Roania OPTIMIZATION OF SPECIFIC FACTORS TO PRODUCE SPECIAL ALLOYS I. Milosan 1 1 Transilvania
More informationFault Diagnosis of Planetary Gear Based on Fuzzy Entropy of CEEMDAN and MLP Neural Network by Using Vibration Signal
ITM Web of Conferences 11, 82 (217) DOI: 1.151/ itconf/2171182 IST217 Fault Diagnosis of Planetary Gear Based on Fuzzy Entropy of CEEMDAN and MLP Neural Networ by Using Vibration Signal Xi-Hui CHEN, Gang
More informationA Note on Scheduling Tall/Small Multiprocessor Tasks with Unit Processing Time to Minimize Maximum Tardiness
A Note on Scheduling Tall/Sall Multiprocessor Tasks with Unit Processing Tie to Miniize Maxiu Tardiness Philippe Baptiste and Baruch Schieber IBM T.J. Watson Research Center P.O. Box 218, Yorktown Heights,
More informationOn Constant Power Water-filling
On Constant Power Water-filling Wei Yu and John M. Cioffi Electrical Engineering Departent Stanford University, Stanford, CA94305, U.S.A. eails: {weiyu,cioffi}@stanford.edu Abstract This paper derives
More informationCOS 424: Interacting with Data. Written Exercises
COS 424: Interacting with Data Hoework #4 Spring 2007 Regression Due: Wednesday, April 18 Written Exercises See the course website for iportant inforation about collaboration and late policies, as well
More informationANALYSIS OF REFLECTOR AND HORN ANTENNAS USING MULTILEVEL FAST MULTIPOLE ALGORITHM
European Congress on Coputational Methods in Applied Sciences and Engineering ECCOMAS 2 Barcelona, 11-14 Septeber 2 ECCOMAS ANALYSIS OF REFLECTOR AND HORN ANTENNAS USING MULTILEVEL FAST MULTIPOLE ALGORITHM
More informationStatistical clustering and Mineral Spectral Unmixing in Aviris Hyperspectral Image of Cuprite, NV
CS229 REPORT, DECEMBER 05 1 Statistical clustering and Mineral Spectral Unixing in Aviris Hyperspectral Iage of Cuprite, NV Mario Parente, Argyris Zynis I. INTRODUCTION Hyperspectral Iaging is a technique
More informationA Novel Activity Detection Method
A Novel Activity Detection Method Gismy George P.G. Student, Department of ECE, Ilahia College of,muvattupuzha, Kerala, India ABSTRACT: This paper presents an approach for activity state recognition of
More informationarxiv: v1 [cs.lg] 8 Jan 2019
Data Masking with Privacy Guarantees Anh T. Pha Oregon State University phatheanhbka@gail.co Shalini Ghosh Sasung Research shalini.ghosh@gail.co Vinod Yegneswaran SRI international vinod@csl.sri.co arxiv:90.085v
More informationGrafting: Fast, Incremental Feature Selection by Gradient Descent in Function Space
Journal of Machine Learning Research 3 (2003) 1333-1356 Subitted 5/02; Published 3/03 Grafting: Fast, Increental Feature Selection by Gradient Descent in Function Space Sion Perkins Space and Reote Sensing
More informationEffective joint probabilistic data association using maximum a posteriori estimates of target states
Effective joint probabilistic data association using axiu a posteriori estiates of target states 1 Viji Paul Panakkal, 2 Rajbabu Velurugan 1 Central Research Laboratory, Bharat Electronics Ltd., Bangalore,
More informationRecovering Data from Underdetermined Quadratic Measurements (CS 229a Project: Final Writeup)
Recovering Data fro Underdeterined Quadratic Measureents (CS 229a Project: Final Writeup) Mahdi Soltanolkotabi Deceber 16, 2011 1 Introduction Data that arises fro engineering applications often contains
More informationA Simplified Analytical Approach for Efficiency Evaluation of the Weaving Machines with Automatic Filling Repair
Proceedings of the 6th SEAS International Conference on Siulation, Modelling and Optiization, Lisbon, Portugal, Septeber -4, 006 0 A Siplified Analytical Approach for Efficiency Evaluation of the eaving
More informationNeural Networks and the Back-propagation Algorithm
Neural Networks and the Back-propagation Algorithm Francisco S. Melo In these notes, we provide a brief overview of the main concepts concerning neural networks and the back-propagation algorithm. We closely
More informationRECOVERY OF A DENSITY FROM THE EIGENVALUES OF A NONHOMOGENEOUS MEMBRANE
Proceedings of ICIPE rd International Conference on Inverse Probles in Engineering: Theory and Practice June -8, 999, Port Ludlow, Washington, USA : RECOVERY OF A DENSITY FROM THE EIGENVALUES OF A NONHOMOGENEOUS
More informationIntelligent Handwritten Digit Recognition using Artificial Neural Network
RESEARCH ARTICLE OPEN ACCESS Intelligent Handwritten Digit Recognition using Artificial Neural Networ Saeed AL-Mansoori Applications Development and Analysis Center (ADAC), Mohammed Bin Rashid Space Center
More informationMultilayer Neural Networks
Multilayer Neural Networs Brain s. Coputer Designed to sole logic and arithetic probles Can sole a gazillion arithetic and logic probles in an hour absolute precision Usually one ery fast procesor high
More informationACTIVE VIBRATION CONTROL FOR STRUCTURE HAVING NON- LINEAR BEHAVIOR UNDER EARTHQUAKE EXCITATION
International onference on Earthquae Engineering and Disaster itigation, Jaarta, April 14-15, 8 ATIVE VIBRATION ONTROL FOR TRUTURE HAVING NON- LINEAR BEHAVIOR UNDER EARTHQUAE EXITATION Herlien D. etio
More informationPredicting FTSE 100 Close Price Using Hybrid Model
SAI Intelligent Systes Conference 2015 Noveber 10-11, 2015 London, UK Predicting FTSE 100 Close Price Using Hybrid Model Bashar Al-hnaity, Departent of Electronic and Coputer Engineering, Brunel University
More informationArithmetic Unit for Complex Number Processing
Abstract Arithetic Unit or Coplex Nuber Processing Dr. Soloon Khelnik, Dr. Sergey Selyutin, Alexandr Viduetsky, Inna Doubson, Seion Khelnik This paper presents developent o a coplex nuber arithetic unit
More informationP032 3D Seismic Diffraction Modeling in Multilayered Media in Terms of Surface Integrals
P032 3D Seisic Diffraction Modeling in Multilayered Media in Ters of Surface Integrals A.M. Aizenberg (Institute of Geophysics SB RAS, M. Ayzenberg* (Norwegian University of Science & Technology, H.B.
More informationA Theoretical Analysis of a Warm Start Technique
A Theoretical Analysis of a War Start Technique Martin A. Zinkevich Yahoo! Labs 701 First Avenue Sunnyvale, CA Abstract Batch gradient descent looks at every data point for every step, which is wasteful
More information