
Neural Comput & Applic (2007) 16:139–145
DOI 10.1007/s00521-006-0048-8

ORIGINAL ARTICLE

Erhan Kenan Çeven · Sezai Tokat · Özcan Özdemir

Prediction of chenille yarn and fabric abrasion resistance using radial basis function neural network models

Received: 5 June 2005 / Accepted: 3 March 2006 / Published online: April 2006
© Springer-Verlag London Limited 2006

Abstract The abrasion resistance of chenille yarn is crucially important, in particular because the effect sought is always the velvety feel of the pile. Various methods have therefore been developed to predict chenille yarn and fabric abrasion properties. Statistical models have yielded reasonably good abrasion resistance predictions, but no study has yet addressed predicting chenille yarn abrasion resistance with artificial neural network (ANN) models. This paper presents an intelligent modeling methodology based on ANNs for predicting the abrasion resistance of chenille yarns and fabrics. Constituent chenille yarn parameters, namely yarn count, pile length, twist level and pile yarn material type, are used as inputs to the model. The intelligent method is based on a special kind of ANN that uses radial basis functions as activation functions. The predictive power of the ANN model is compared with that of different statistical models, and it is shown that the intelligent model improves the prediction performance.

Keywords Prediction · Artificial neural networks · Radial basis functions · Chenille yarn · Abrasion resistance

Abbreviations $a_k$: coefficient of the kth independent variable · $n$: number of independent variables · $b$: bias term of the multiple-regression models · $b_i$: bias term of the ith hidden neuron · $y_1$: fabric abrasion · $y_2$: yarn abrasion · $x_1$: yarn count · $x_2$: pile length · $x_3$: twist level · $x_4$: pile yarn material type · $c_{ik}$: coefficient of the interaction term $x_i x_k$ · $a$: column data vector · $\bar a$: normalized counterpart of vector $a$ · $m$: number of elements of a data vector $a$ · $\mathrm{ones}(m,1)$: $m \times 1$ column vector with all elements equal to one · $\max(a)$: maximum element of vector $a$ · $\min(a)$: minimum element of vector $a$ · $d_i$: input of the radial basis function $R_i$ · $v_i$: center of the radial basis function $R_i$ · $\sigma_i$: width of the radial basis function $R_i$ · $w^2_{ji}$: weight of the ith hidden unit for the jth output node · $w^1_{ji}$: weight of the ith input for the jth hidden unit · $\mu$: learning rate · $J$: cost function · $\Delta J$: change in the cost function · $\lambda$: adjusted parameter

E. K. Çeven · Ö. Özdemir
Department of Textile Engineering, Uludag University, Gorukle, Bursa 16059, Turkey
E-mail: rceven@uludag.edu.tr; ozdemir@uludag.edu.tr

S. Tokat (corresponding author)
Department of Computer Engineering, Pamukkale University, Kinikli, Denizli, Turkey
E-mail: stokat@pamukkale.edu.tr

1 Introduction

Chenille yarn is composed of two highly twisted core yarns and short lengths of a pile (effect) yarn which project out from the core to give a hairy effect [15]. The short lengths are called the pile, and the highly twisted yarns are called the core. The result is a yarn with a velvet-like pile surface [7, 17]. Chenille yarns are used to produce special knitted and woven fabrics with high added value. A disadvantage of chenille yarn is its distinct weakness: it does not have good inherent abrasion resistance.
Any removal of the effect yarn forming the pile, either during further processing or during the eventual end-use, will expose the ground yarns and result in a bare appearance [10]. Abrasion resistance, like other yarn properties, is chiefly influenced by fiber properties, yarn twist and yarn count [2, 3, 5, 8]. The literature survey shows that there are few studies on the fundamental parameters that characterize chenille yarns. Statistical models (analysis of variance) have yielded reasonably good abrasion resistance predictions [17]. It would therefore be helpful to have a prediction model that forecasts chenille yarn and fabric abrasion resistance accurately.

A well-known approach is to use statistical models for function approximation. Function approximation generalizes the experience gathered from a small subset of examples into an approximation that holds over a larger set; an unknown function is estimated from the observed data. Traditionally, linear or polynomial regression models have been used for this purpose. Artificial neural networks (ANNs), on the other hand, provide a non-parametric estimation technique. ANNs have a universal approximation property and make it possible to work without an explicitly formulated mathematical model [11]. Selecting a statistical or an intelligent method is a design decision, and a suitable selection follows from the performance goal to be achieved [14].

The application of ANN tools to the optimization of textile manufacturing is an active research area. In recent years, multi-layer ANN models with back-propagation learning have been widely used to predict various yarn properties. For instance, the breaking elongation of ring-spun cotton yarns and rotor-spun yarns has been predicted using statistical and ANN models [13, 14]. The relationships among fiber properties and their impact on spinning performance and textile product quality have been predicted using ANNs [9, 18]. Zhu and Ethridge [21] used ANN models for predicting ring and rotor yarn hairiness. Yarn strength has been predicted for different process conditions and material parameters in [6, 19, 20]. In general, error back-propagation multi-layer ANNs have been used in all of the above studies. Multi-layer ANN structures have two main disadvantages: slow learning and convergence to local minima. One way to accelerate learning is to modify the error function of the back-propagation algorithm [16]. Another alternative is to use radial basis function neural networks (RBFNNs), which have been widely used to represent nonlinear mappings between the inputs and outputs of nonlinear systems [12]. In this work, RBFNN models are used to predict the abrasion resistance of chenille yarns and of the chenille fabrics produced with these yarns, and the performance of the RBFNN model is compared with that of statistical models.

2 Gathering raw data

Nm 4 and Nm 6 count chenille yarns were produced using viscose, acrylic (fiber fineness 0.9 and 1.3 dtex), combed cotton, carded cotton and open-end cotton pile yarn materials, with 0.7 and 1.0 mm pile lengths and with 700 and 850 turns/m twist levels. Acrylic core yarns were used for the production of the chenille yarns. Upholstery fabrics were produced using these yarns as filling yarn, with the same weave for all fabrics. Abrasion tests of the chenille fabric specimens were conducted on a Martindale wear and abrasion tester in accordance with test method ASTM D 4966-89 [1]. Fabrics were abraded for 10,000 cycles and the mass loss ratios were determined. In order to observe the relationship between fabric and yarn abrasion resistance, we designed a chenille yarn abrasion testing device by making some modifications to a Crockmeter, as explained in an earlier paper [17]. The abrasion (% mass loss) values for the chenille fabrics and chenille yarns are given in Table 1.

Table 1 Abrasion (% mass loss) values for (a) chenille fabrics and (b) chenille yarns [4]: 48 numbered samples covering all combinations of yarn count, pile length, twist level and the six pile materials (numeric entries not reproduced)
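For illustration (the paper itself contains no code), the experimental factors above can be assembled into the model's input representation as follows. The integer coding of the pile material and the sample values are assumptions made for this sketch; the paper treats material type as a numeric variable but does not state its exact coding.

```python
import numpy as np

# Hypothetical encoding of the pile yarn material type (x4); the paper
# does not state the coding it used, so this mapping is an assumption.
MATERIALS = {
    "viscose": 1,
    "acrylic_0.9dtex": 2,
    "acrylic_1.3dtex": 3,
    "combed_cotton": 4,
    "carded_cotton": 5,
    "open_end_cotton": 6,
}

def make_input(yarn_count_nm, pile_length_mm, twist_tpm, material):
    """Build one input row [x1, x2, x3, x4] for the regression/ANN models."""
    return np.array(
        [yarn_count_nm, pile_length_mm, twist_tpm, MATERIALS[material]],
        dtype=float,
    )

# Example: one of the 48 factor combinations of Table 1; the corresponding
# targets y1 (fabric abrasion) and y2 (yarn abrasion) come from the table.
x = make_input(4, 0.7, 700, "viscose")
```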
3 Statistical methods

The most common type of multiple regression is linear multiple regression, in which a simple model relates several independent variables to a dependent variable through a straight line,

$$y_k = a_1 x_1 + a_2 x_2 + \cdots + a_n x_n + b, \qquad (1)$$

where the $a_i$ are the coefficients of the independent variables and $b$ is the bias term; the dependent variable $y_k$ is modeled by a linear relation. Nonlinear multiple regression is similar, except that a curve rather than a straight line is fitted to the data set. Just as in the linear case, the sum of the squared vertical distances between the fitted model and the data points is minimized. One nonlinear multiple-regression model is the polynomial regression model with interaction terms,

$$y_k = a_1 x_1 + \cdots + a_n x_n + c_{12}\, x_1 x_2 + \cdots + c_{1n}\, x_1 x_n + \cdots + c_{n-1,n}\, x_{n-1} x_n + b, \qquad (2)$$

where the $c_{ij}$ are the coefficients of the interaction terms. In this study, the linear model (1) and the interaction model (2) are used for data regression with $n = 4$, the independent variables being yarn count ($x_1$), pile length ($x_2$), twist level ($x_3$) and pile yarn material type ($x_4$). The models (1) and (2) are multi-input single-output systems; therefore, a separate multiple-regression model is used for each dependent variable $y_k$. The dependent variables predicted with these models are fabric abrasion (mass loss, %) ($y_1$) and yarn abrasion (mass loss, %) ($y_2$). The coefficients of the linear and nonlinear multiple-regression models are given in Table 2.

Table 2 Coefficients of the multiple-regression models: (a) linear model, (b) interaction model (numeric entries not reproduced)
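Both regression models can be fitted by ordinary least squares. The NumPy sketch below is illustrative rather than the authors' implementation; it assumes X is the 40 × 4 matrix of training inputs and y the corresponding abrasion targets for one dependent variable.

```python
import numpy as np
from itertools import combinations

def fit_linear(X, y):
    """Least-squares fit of eq. (1): y = a1*x1 + ... + an*xn + b."""
    A = np.column_stack([X, np.ones(len(X))])       # design matrix with bias column
    coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coeffs                                   # [a1, ..., an, b]

def fit_interaction(X, y):
    """Least-squares fit of eq. (2): eq. (1) plus pairwise terms x_i * x_j."""
    pairs = list(combinations(range(X.shape[1]), 2))
    inter = np.column_stack([X[:, i] * X[:, j] for i, j in pairs])
    A = np.column_stack([X, inter, np.ones(len(X))])
    coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coeffs                                   # [a1..a4, c12..c34, b]

# Usage sketch: one separate model per dependent variable, as in the paper.
# coeffs_fabric = fit_interaction(X_train, y1_train)
# coeffs_yarn   = fit_interaction(X_train, y2_train)
```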

4 Intelligent method

For the intelligent case, we aim to obtain both dependent variables with a single ANN structure. The ANN design process involves four steps: normalizing the gathered data, selecting the ANN architecture, training the network, and testing the network.

4.1 Normalization

The training and test data are chosen from the gathered data given in Table 1. The inputs are yarn count, pile length, twist level and pile yarn material type; the outputs are yarn abrasion and fabric abrasion. Eight of the 48 samples in Table 1 are randomly selected as the test data and the remaining 40 are taken as the training data. Normalization should be done such that higher values do not suppress the influence of lower values and the symmetry of the activation function is retained. In this study, a linear normalization over the training and test data space is used, so that all inputs and outputs are normalized to the same range of values, taken as [0, 1]. For a column vector $a$, the normalized data $\bar a$ with values in the interval [0, 1] are obtained as

$$\bar a = \frac{a - \min(a)\,\mathrm{ones}(m,1)}{\max(a) - \min(a)}, \qquad (3)$$

where $\mathrm{ones}(m,1)$ is a column vector with the same dimension as $a$ and all elements equal to 1. De-normalization is used to convert the network output back into the original units; inverting (3),

$$a = \bar a\,\bigl(\max(a) - \min(a)\bigr) + \min(a)\,\mathrm{ones}(m,1). \qquad (4)$$
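A minimal sketch of (3) and (4) in NumPy, applied column-wise to each input and output; illustrative only, since the paper gives no code.

```python
import numpy as np

def normalize(a):
    """Eq. (3): linearly map a data column onto [0, 1]."""
    lo, hi = a.min(), a.max()       # min(a), max(a) over the data space
    return (a - lo) / (hi - lo), lo, hi

def denormalize(a_bar, lo, hi):
    """Eq. (4): convert normalized values back to the original units."""
    return a_bar * (hi - lo) + lo

# Example: normalize a column, recover it afterwards.
a = np.array([700.0, 850.0, 700.0, 850.0])          # e.g. twist levels (x3)
a_bar, lo, hi = normalize(a)
assert np.allclose(denormalize(a_bar, lo, hi), a)   # round trip is exact
```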

4.2 Selecting the artificial neural network architecture

Artificial neural networks can learn in real time and can thus adapt flexibly to changing environments, making reasonable decisions on the basis of incomplete and ambiguous information. A large number of ANN structures have been proposed in the literature, in various forms such as the multi-layer perceptron, the Hopfield net, the Kohonen self-organizing map, etc. [11]. The objective of all these structures is to model the input/output behavior of the system in question: ANNs are trained so that a particular input leads to a specific output.

In this study, the RBFNN is chosen as the ANN structure. RBFNNs were first introduced for the solution of multi-variable interpolation problems and can be seen as a sensible alternative to the use of complex polynomials for function approximation. The schematic diagram of the RBFNN is given in Fig. 1, with four inputs, p hidden units and two outputs.

Fig. 1 The structure of the proposed single-hidden-layer ANN: inputs $x_1,\ldots,x_4$, RBF units $R_1,\ldots,R_p$ with biases $b_1,\ldots,b_p$, and outputs $y_1$, $y_2$

There is only one hidden layer, which uses radial basis function (RBF) neurons. A function is an RBF if its output depends on the distance of the input from a given stored vector. Different functions such as multiquadrics, inverse multiquadrics and biharmonics can be used as RBFs. A typical selection is the Gaussian function, for which the output of the ith hidden unit is written as

$$R_i(d_i) = \exp\bigl(-\|d_i - v_i\|^2 / \sigma_i^2\bigr), \quad i = 1, 2, \ldots, p, \qquad (5)$$

where $v_i$ and $\sigma_i$ are the center and width of the RBF at the ith hidden neuron, $p$ is the number of hidden units, $R_i(\cdot)$ is the ith hidden unit response with a single maximum where the input equals the center, and $d \in \mathbb{R}^{p \times 1}$ is the vector of weighted inputs with components

$$d_i = [\,w^1_{1i}\; w^1_{2i}\; w^1_{3i}\; w^1_{4i}\,]\,[\,x_1\; x_2\; x_3\; x_4\,]^{\mathrm T} + b_i. \qquad (6)$$

Here $b_i$ is the bias term of the ith hidden neuron. Thus, the RBF neuron computed by the ith hidden unit is maximal when the input $d_i$ is near the center $v_i$ of that unit. The output of the RBFNN is simply a weighted sum of the hidden units,

$$y_j = \sum_{i=1}^{p} w^2_{ji}\, R_i(d_i), \qquad (7)$$

where $w^2_{ji}$ is the weight of the ith hidden unit for the jth output node. An RBFNN is completely determined by choosing the dimension of the input and output data, the number of RBF neurons, and the values of $v_i$, $\sigma_i$, $w^1_{ji}$ and $w^2_{ji}$. The function approximation of an RBFNN is obtained by adjusting these parameters. The dimension of the input-output data is problem dependent. The number of RBF neurons is a critical choice and, depending on the approach, can be determined a priori or adaptively. The values of $v_i$, $\sigma_i$, $w^1_{ji}$ and $w^2_{ji}$ can be adjusted adaptively using a learning algorithm. A basic learning algorithm is the gradient descent technique, where the learning rule takes the form

$$\Delta\lambda = -\mu\,\frac{\partial J}{\partial \lambda}, \qquad (8)$$

where $\mu$ is the learning rate, $\lambda$ denotes one of the adjusted parameters $v_i$, $\sigma_i$, $w^1_{ji}$, $w^2_{ji}$, and $J$ is the cost function. In practice, a common choice for the cost function is the root mean square error (RMSE),

$$\mathrm{RMSE} = \sqrt{\frac{1}{n}\sum_{j=1}^{n}\bigl(y_j - y_j^{d}\bigr)^2}, \qquad (9)$$

which is calculated for each output. In (9), $y_j^d$ is the desired normalized output of the jth output node, and the actual output $y_j$ is converted back to the original units with the de-normalization equation (4). Training and testing of the network on the gathered data, and comparisons with the statistical methods, are given in the next section.
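The forward pass of (5)–(7) for the four-input, two-output network of Fig. 1 can be written compactly. The sketch below uses randomly initialized parameters standing in for trained values; in the paper, all parameters are adapted with gradient steps of the form (8).

```python
import numpy as np

rng = np.random.default_rng(0)
p = 10                           # number of RBF neurons (illustrative value)

# Parameters; random placeholders here, adapted via eq. (8) during training.
W1 = rng.normal(size=(p, 4))     # w1_ji: input weights of the hidden units
b = rng.normal(size=p)           # b_i: hidden-unit biases
v = rng.normal(size=p)           # v_i: RBF centers
sigma = np.ones(p)               # sigma_i: RBF widths
W2 = rng.normal(size=(2, p))     # w2_ji: hidden-to-output weights

def forward(x):
    """Map normalized inputs [x1..x4] to normalized outputs [y1, y2]."""
    d = W1 @ x + b                            # eq. (6): weighted inputs d_i
    R = np.exp(-((d - v) ** 2) / sigma ** 2)  # eq. (5): Gaussian responses
    return W2 @ R                             # eq. (7): weighted sum

def rmse(y_pred, y_desired):
    """Eq. (9), evaluated over n observations of one output."""
    return np.sqrt(np.mean((y_pred - y_desired) ** 2))
```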
5 Simulation results

Choosing the number of RBF neurons is an important step of the algorithm; the network is therefore trained with different numbers of RBF neurons. The RMSE performances for the training and the test data are given separately in Fig. 2. As can be seen from Fig. 2a, the training performance improves as the number of RBF neurons is increased. The same observation is not valid for the test data: the test performance improves up to a certain number of RBF neurons but then degrades visibly as the number keeps increasing. The reason is that, if an excessive number of neurons is used, the ANN learns the training data too closely and the smoothness of the output function disappears. This overfitting phenomenon is a critical problem in almost all ANN architectures.
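The neuron-count study of Fig. 2 can be reproduced in outline by sweeping p and comparing training and test RMSE. The sketch below is a simplified variant: it fixes the RBF centers at randomly chosen training rows and solves only the output weights by least squares, a common RBFNN shortcut, whereas the paper adapts all parameters by gradient descent; the data are random placeholders for the normalized 40/8 split of Sect. 4.1.

```python
import numpy as np

rng = np.random.default_rng(1)

def train_rbfnn(X_tr, Y_tr, p, width=1.0):
    """Simplified RBFNN: fixed Gaussian centers, least-squares output weights."""
    centers = X_tr[rng.choice(len(X_tr), size=p, replace=False)]
    def design(X):  # Gaussian response of every sample to every center
        dist2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
        return np.exp(-dist2 / width ** 2)
    W2, *_ = np.linalg.lstsq(design(X_tr), Y_tr, rcond=None)
    return lambda X: design(X) @ W2             # trained predictor

def rmse(a, b):
    # Aggregated over both outputs for brevity; eq. (9) is per output.
    return np.sqrt(np.mean((a - b) ** 2))

# Placeholder data standing in for the normalized 40-sample training set
# and 8-sample test set; replace with the Table 1 data to reproduce Fig. 2.
X_train, Y_train = rng.random((40, 4)), rng.random((40, 2))
X_test, Y_test = rng.random((8, 4)), rng.random((8, 2))

for p in (2, 5, 10, 20, 35):
    predict = train_rbfnn(X_train, Y_train, p)
    print(f"p={p:2d}  train={rmse(predict(X_train), Y_train):.3f}  "
          f"test={rmse(predict(X_test), Y_test):.3f}")
```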

Fig. 2 RBFNN performance (RMSE of fabric and yarn abrasion) for different numbers of RBF neurons: a training data, b test data

It is seen in Fig. 2a that the improvement in training performance becomes small beyond a moderate number of RBF neurons, while the test performance in Fig. 2b is at its optimum over a narrow band of neuron counts. Thus, in order to use a minimum number of neurons, the number of RBF neurons is fixed at the lower end of this band.

The performances of the models are presented graphically in Figs. 3 and 4 for the training and test data, respectively. It is seen that the RBFNN model fits the training and test data best. To show the improvement in more detail, a number of performance criteria are also investigated for comparison: the RMSE value, the correlation coefficient R between the actual and predicted abrasion values, and the mean absolute error (MAE). These values are used to judge the predictive power of the various methods. The RMSE values, correlation coefficients and MAEs of the methods for both dependent variables, fabric abrasion ($y_1$) and yarn abrasion ($y_2$), are given in Table 3.

Table 3 RMSE, R and MAE of the given statistical and intelligent methods for (a) training data and (b) test data (numeric entries not reproduced)

When the intelligent method and the multiple-regression methods are compared, the table shows that the intelligent method using the RBFNN has the best RMSE performance on both the training and the test data. The higher the correlation coefficient R is in magnitude, the better the predicted data fit the actual data; it is evident from Table 3 that the RBFNN model has the highest R values, which means that its output fits the actual data best. The MAE is the sum of the absolute errors divided by the number of observations, where the absolute error is the absolute value of the difference between predicted and actual data; the closer the MAE is to zero, the more accurate the predictions. In Table 3, the RBFNN has the smallest MAE values for both fabric abrasion and yarn abrasion, on both the training and the test data. Comparing the results for the training and test data, the RMSE and MAE values of the three methods are smaller for yarn abrasion than for fabric abrasion, and the R values are correspondingly higher for yarn abrasion.

Fig. 3 Root mean square error performances of the methods for the training data: a fabric abrasion (mass loss, %), b yarn abrasion (mass loss, %)
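The three criteria of Table 3 can be computed directly from predicted and actual values; a short sketch, with np.corrcoef supplying the correlation coefficient R.

```python
import numpy as np

def criteria(y_pred, y_actual):
    """Return (RMSE, R, MAE) for one output, as compared in Table 3."""
    err = y_pred - y_actual
    rmse = np.sqrt(np.mean(err ** 2))              # root mean square error
    r = np.corrcoef(y_pred, y_actual)[0, 1]        # correlation, predicted vs actual
    mae = np.mean(np.abs(err))                     # mean absolute error
    return rmse, r, mae

# Evaluated separately for y1 and y2, on the training and the test data.
```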

6 Conclusions

In this study, the abrasion resistance of chenille yarns and fabrics has been predicted using statistical and ANN models. The function approximation property of the RBFNN was used to analyze mass loss in the abrasion process. The simulations show that the prediction performance of the intelligent model is better than that of the given statistical models. The performance criteria indicate that the prediction performance of the intelligent method is high despite the availability of only a relatively small data set for training, and in each case the performance criteria are better than those of the given statistical models. As a result, it has been shown that such a model can help select the most appropriate machine settings and material type before the production process. As further research, new experimental work on predicting the abrasion properties of fabrics with different weave types is suggested.

Fig. 4 Root mean square error performances of the methods for the test data: a fabric abrasion (mass loss, %), b yarn abrasion (mass loss, %)

References

1. ASTM D 4966-89: Standard test method for abrasion resistance of textile fabrics (Martindale abrasion tester method). Annual Book of ASTM Standards, Vol 07.02
2. Brorens PH, Lappage J, Bedford J, Ranford SL (1990) Studies on the abrasion resistance of weaving yarns. J Text Inst 81(2):126–134
3. Bryan ED, Yu C, Oxenham W (1999) The abrasion characteristics of ring spun and open-end yarns. In: Proceedings of the EFS research forum
4. Çeven EK. An investigation about the effect of some production parameters on yarn properties at chenille spinning machines. MSc thesis, Uludag University
5. Cheng L, Adams DL (1995) Yarn strength prediction using neural networks. Text Res J 65(9):495–500
6. Cheng KPS, Lam HLI (2003) Evaluating and comparing the physical properties of spliced yarns by regression and neural network techniques. Text Res J 73(2):161–164
7. Chenille Background Brochure. Chenille International Manufacturers Association (CIMA), Italy
8. Choi KF, Kim KL (2004) Fiber segment length distribution on the yarn surface in relation to yarn abrasion resistance. Text Res J 74(7):603–606
9. Ethridge D, Zhu R (1996) Prediction of rotor spun cotton yarn quality: a comparison of neural network and regression algorithms. In: Proceedings of the Beltwide Cotton Conference
10. Gong RH, Wright RM (2002) Fancy yarns: their manufacture and application. Woodhead Publishing, UK
11. Haykin S (1994) Neural networks: a comprehensive foundation. Macmillan, New York
12. Liu Y, Ying Y, Lu H (2004) Optical method for predicting total soluble solids in pears using radial basis function networks. In: Proceedings of SPIE, vol 5587, pp 198–203
13. Majumdar PK, Majumdar A (2004) Predicting the breaking elongation of ring spun cotton yarns using mathematical, statistical, and artificial neural network models. Text Res J 74(7):652–655
14. Majumdar A, Majumdar PK, Sarkar B (2005) Application of linear regression, artificial neural network and neuro-fuzzy algorithms to predict the breaking elongation of rotor-spun yarns. Indian J Fibre Text Res 30(1):19–25

15. McIntyre JE, Daniels PN (1995) Textile terms and definitions. The Textile Terms and Definitions Committee, Biddles Limited, UK
16. Oh S-H (1997) Improving the error backpropagation algorithm with a modified error function. IEEE Trans Neural Netw 8(3):799–803
17. Özdemir Ö, Çeven EK (2004) Influence of chenille yarn manufacturing parameters on yarn and upholstery fabric abrasion resistance. Text Res J 74(6):515–520
18. Pynckels F, Kiekens P, Sette S, Langenhove LV, Impe K (1997) The use of neural nets to simulate the spinning process. J Text Inst 88(4):447
19. Rajamanickam R, Hansen SM, Jayaraman S (1997) Analysis of the modeling methodologies for predicting the strength of air-jet spun yarns. Text Res J 67(1):39–44
20. Ramesh MC, Rajamanickam R, Jayaraman S (1995) Prediction of yarn tensile properties using artificial neural networks. J Text Inst 86:459–469
21. Zhu R, Ethridge D (1997) Predicting hairiness for ring and rotor spun yarns and analyzing the impact of fiber properties. Text Res J 67:694–698