Predicting FTSE 100 Close Price Using Hybrid Model
SAI Intelligent Systems Conference 2015, November 10-11, 2015, London, UK

Bashar Al-hnaity, Department of Electronic and Computer Engineering, Brunel University London, UK
Maysam Abbod, Department of Electronic and Computer Engineering, Brunel University London, UK
Maher Alarraj, Department of Electronic and Computer Engineering, Brunel University London, UK

Abstract. Predicting financial time series (stock index prices) is a highly challenging task. Support vector regression (SVR), support vector machines (SVM) and back-propagation neural networks (BPNN) are among the most popular data mining techniques for financial time series prediction. This paper introduces a hybrid combination model that combines the three models so as to benefit from all of them. A quantization factor is used here for the first time to improve the prediction output of the single SVM and SVR models, and a genetic algorithm (GA) is used to determine the weights of the proposed model. The FTSE 100 daily index closing price is used to evaluate the performance of the proposed model. The numerical results show that the hybrid model outperforms every single model as well as the traditional simple-average combiner.

Keywords: genetic algorithm; support vector regression; support vector machine; back-propagation neural network; hybrid model; FTSE 100 stock index.

I. INTRODUCTION

In the finance domain, stock price prediction is an important topic that has attracted considerable attention from researchers in recent years. Stock prices exhibit high volatility, complexity, dynamics and turbulence, which makes their prediction a challenging task. Many attempts to predict stock prices have been recorded, using methodologies such as fundamental methods, technical methods and traditional time series methods.
Fundamental methods use a corporation's financial information to predict/forecast supply, demand, profit, industry strength, management ability and other intrinsic matters affecting the market value and growth potential of a stock [1]. Fundamental analysis draws on corporate financial statements, interim reports, historical financial trends and forecasts of future growth, sales and profits; investors who follow it believe that these factors should rule the selection of stocks and the timing of sales. Technical methods, on the other hand, analyse recent and historical trends and cycles related to the stock price, together with factors beyond it such as dividend payments, trading volume, index trends, industry group trends and popularity, and the volatility of the stock [2]. Historical financial information is not the only information that technical methods depend on: to determine changes in a stock and in the market, analysts also draw on information such as recent trends in stock price changes, price-earnings relationships, the trading volume of a particular stock or industry, and other similar indicators [3]. Traditional time series methods offer various prediction/forecasting techniques for stock movements, such as generalized autoregressive conditional heteroskedasticity (GARCH), autoregressive integrated moving average (ARIMA) and multivariate regression. In the last decade, computational intelligence and data mining techniques have been introduced into finance and widely used in stock price prediction [4], [5]. According to [5], [6], [7], [8], the ability of neural networks to learn and generalize nonlinear data trends is the main reason for their wide use in predicting/forecasting stock prices. Moreover, their capability to adapt to data patterns and build relationships between input and output can achieve higher accuracy than traditional methods [9], [10].
Support vector machines have shown great potential and superior performance in practical applications, which can be attributed to combining statistical learning theory with the principle of structural risk minimization. These unique capabilities have drawn the attention of investors as well as researchers to such techniques for financial time series prediction/forecasting. With the introduction of the insensitivity loss function for regression by Vapnik [11], the SVM became known as support vector regression (SVR). SVR can solve many nonlinear estimation and financial time series prediction problems, so interest in this model has also increased dramatically. In this paper, the FTSE 100 daily closing price is predicted. Different data mining techniques are utilized to capture the nonlinear characteristics of a stock price time series. The proposed approach combines SVM, SVR and BPNN, with the weights of these models determined by the GA. The performance of the proposed model is evaluated on the FTSE 100 daily closing price as an illustrative example. The examples below show that the proposed model outperforms all single models and is more robust with regard to possible changes in the data structure.
The paper is structured as follows: Section 2 briefly introduces the component models SVM, SVR and BPNN. Section 3 describes the data pre-processing and the hybrid methodology. The experimental results on the data set described above are presented in Section 4. Finally, conclusions and future work are given in Section 5.

II. METHODOLOGY

A. Back propagation neural network (BPNN)

For modelling time series with nonlinear structures, the most commonly used architecture is the three-layer feed-forward back-propagation network [12]. Back propagation determines the weights of the connections among the nodes from the training data, minimizing a least-mean-square error measure between the desired and the estimated values at the output of the neural network. Initial values are assigned to the connection weights. To update the weights, the error between the predicted and actual output values is propagated backwards through the network. This supervised learning procedure attempts to minimize the error between the desired and predicted outputs [13]. The network contains a hidden layer of neurons with a nonlinear transfer function and an output layer of neurons with a linear transfer function. Figure 1 illustrates the architecture of a back-propagation network, where x_j (j = 1, 2, ..., n) are the input variables, z_i (i = 1, 2, ..., m) are the outputs of the neurons in the hidden layer, and y_t (t = 1, 2, ..., l) are the outputs of the neural network [14].
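As a concrete illustration of the forward pass just described, the sketch below implements a sigmoid hidden layer followed by a linear output layer in plain Python. The layer sizes and weight values are made up purely for illustration and are not taken from the paper's experiments.

```python
import math
import random

def sigmoid(v):
    # Hidden-layer activation f_H(v) = 1 / (1 + exp(-v)).
    return 1.0 / (1.0 + math.exp(-v))

def forward(x, w_hidden, w_output):
    # Hidden layer: net_i = sum_j w_ji * x_j, then z_i = f_H(net_i).
    z = [sigmoid(sum(w * xj for w, xj in zip(row, x))) for row in w_hidden]
    # Output layer: linear units y_t = sum_i w_it * z_i.
    return [sum(w * zi for w, zi in zip(row, z)) for row in w_output]

# Made-up sizes for illustration: 3 inputs, 4 hidden neurons, 1 output.
random.seed(0)
x = [0.1, -0.2, 0.3]
w_hidden = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(4)]
w_output = [[random.uniform(-1, 1) for _ in range(4)]]
print(forward(x, w_hidden, w_output))
```

In back-propagation training the weights would be updated iteratively from the output error; only the forward pass is sketched here.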
net_i = Σ_{j=0}^{n} w_{ji} x_j,  i = 1, 2, ..., m   (1)

z_i = f_H(net_i),  i = 1, 2, ..., m   (2)

where net_i is the activation value of the i-th node, z_i is the output of the hidden layer, and f_H is the activation function of a node, in most cases the sigmoid function

f_H(x) = 1 / (1 + exp(-x))   (3)

In the second stage, the outputs of all neurons in the output layer are given by

y_t = f_t(Σ_{i=0}^{m} w_{it} z_i),  t = 1, 2, ..., l   (4)

where f_t (t = 1, 2, ..., l) is the activation function, usually a linear function. The weights are initially assigned random values and are traditionally modified by the delta rule according to the learning samples. The topology in this study was determined by trial and error: two-layer feed-forward back-propagation networks with 5 to 20 neurons in the hidden layer were tried, and the model stopped training after reaching a pre-determined number of epochs. The ideal topology was selected based on the lowest mean square error.

B. Support vector machine (SVM)

SVM theory was developed by Vladimir Vapnik. It is considered one of the most important breakthroughs in the machine learning field and can be applied to classification and regression [16]. In modelling with an SVM, the main goal is to select the optimal hyperplane in a high-dimensional space so that the upper bound of the generalization error is minimal. An SVM can only deal directly with linear samples, but mapping the original space into a higher-dimensional space makes the analysis of nonlinear samples possible [17], [18]. Given data points (x_i, y_i) generated randomly and independently from an unknown function, the SVM approximates the function as

g(x) = w φ(x) + b

where φ(x) is the feature, nonlinearly mapped from the input space x. The coefficients w and b are estimated by minimizing the regularized risk function

R(C) = C (1/N) Σ_{i=1}^{N} L(d_i, y_i) + (1/2) ||w||²   (5)

Figure 1.
Architecture of a feed-forward back-propagation neural network [14]

In theory, a neural network can simulate any kind of data pattern given sufficient training. Training the neural network determines the weights that achieve the correct outputs. Training updates the weight values in stages [15]: in the first stage, the hidden layer, the outputs of all neurons in the hidden layer are calculated by equations (1) and (2) above.

For the SVM, the ε-insensitive loss function is

L(d, y) = |d − y| − ε  if |d − y| ≥ ε;  0 otherwise   (6)

In the above function, C and ε are both prescribed parameters: C is called the regularization constant, while ε is the radius of the insensitive tube. L(d, y) is the insensitive loss function; the term C (1/N) Σ_{i=1}^{N} L(d_i, y_i) is the empirical error, while (1/2)||w||² indicates the flatness of the function. The trade-off between the empirical risk and the flatness of the model is governed by C. Introducing the positive slack variables ζ_i and ζ_i*, the minimization is transformed into

R(w, ζ, ζ*) = (1/2) w wᵀ + C Σ_{i=1}^{N} (ζ_i + ζ_i*)   (7)
subject to:

w φ(x_i) + b − d_i ≤ ε + ζ_i   (8)
d_i − w φ(x_i) − b ≤ ε + ζ_i*   (9)
ζ_i, ζ_i* ≥ 0   (10)

After the Lagrange multipliers are introduced and the optimality constraints exploited, the decision function takes the kernel form

f(x) = Σ_{i=1}^{l} (α_i − α_i*) K(x_i, x_j) + b   (11)

where α_i, α_i* are Lagrange multipliers satisfying α_i α_i* = 0, α_i ≥ 0 and α_i* ≥ 0. The kernel value equals the inner product of two vectors x_i and x_j in the feature space φ(x_i) and φ(x_j). The most popular kernel function is the radial basis function (RBF), given in equation (12):

K(x_i, x_j) = exp(−γ ||x_i − x_j||²)   (12)

A sound theoretical background, geometric interpretation, unique solution and mathematical tractability are the main advantages that have made the SVM attractive to researchers and investors and led to its application in many fields, including financial time series prediction.

C. Support vector regression (SVR)

As explained above, the idea of the SVM is to construct a hyperplane, or set of hyperplanes, in a high- or infinite-dimensional space, which can be used for classification. For regression, the same margin concept is used: the goal is to construct a hyperplane that is as close to as many of the data points as possible. Choosing a hyperplane with small norm is a main objective, while simultaneously minimizing the sum of the distances from the data points to the hyperplane [19]. When the SVM is used to solve a regression problem it is called support vector regression (SVR), where the aim is to find a function f with parameters w and b by minimizing the regression risk

R(f) = (1/2) (w, w) + C Σ_{i=1}^{N} l(f(x_i), y_i)   (13)

Here C is a trade-off term, and the first term is the margin used in SVMs for measuring the VC dimension [15].
f(x, w, b) = (w, φ(x)) + b   (14)

In the function above, φ(x): x → Ω is the kernel function, mapping x into the high-dimensional space. Following [19], the ε-insensitive loss function is used:

l(y, f(x)) = 0 if |y − f(x)| < ε;  |y − f(x)| − ε otherwise   (15)

The constrained minimization problem below is equivalent to the minimization in equation (13):

min R(w, b, ξ^(*)) = (1/2) (w, w) + C Σ_{i=1}^{N} (ξ_i + ξ_i*)   (16)

subject to:

y_i − ((w, φ(x_i)) + b) ≤ ε + ξ_i   (17)
((w, φ(x_i)) + b) − y_i ≤ ε + ξ_i*   (18)
ξ_i, ξ_i* ≥ 0   (19), (20)

For a sample (x_i, y_i), ξ_i and ξ_i* measure the errors above and below the tube. The standard way to solve the above minimization problem is to construct the dual problem of this optimization problem (the primal problem) with the Lagrange method and maximize the dual function. There are four common kernel functions; in this study the radial basis function (RBF) is used.

III. THE FRAMEWORK OF PREDICTION MODELS

A. Hybrid model

Combining different prediction techniques has been investigated widely in the literature. According to [20], [21], combining various techniques is most useful for short-range prediction. [22] states that a simple average may work as well as more sophisticated approaches; however, a single model can sometimes produce more accurate predictions than other methods, and simple averages are not sufficient in such cases [22]. A hybrid prediction method is based on a linear combination of different prediction models. Assume that in a prediction problem the actual value in period t is y_t (t = 1, 2, ..., n) and there are m different predictors. If the prediction of model i in period t is f_it (i = 1, 2, ..., m), the corresponding prediction error is e_it = y_t − f_it and the weight vector is w = [w_1, w_2, ..., w_m]ᵀ.
The predicted value of the hybrid model is then computed as follows [23], [24]:

ŷ_t = Σ_{i=1}^{m} w_i f_it,  t = 1, 2, ..., n   (21)

Σ_{i=1}^{m} w_i = 1   (22)

Equation (21) can also be expressed in matrix form:

Ŷ = FW   (23)

where Ŷ = [ŷ_1, ŷ_2, ..., ŷ_n]ᵀ and F = [f_it]_{n×m}. The error of the hybrid prediction model is

e_t = y_t − ŷ_t = Σ_{i=1}^{m} w_i y_t − Σ_{i=1}^{m} w_i f_it = Σ_{i=1}^{m} w_i (y_t − f_it) = Σ_{i=1}^{m} w_i e_it   (24)

This study proposes a hybrid model combining BPNN, SVM and SVR:

Y_combined(t) = w_1 Y_BPNN(t) + w_2 Y_SVM(t) + w_3 Y_SVR(t),  t = 1, 2, ..., n   (25)

where Y_combined(t) is the hybrid prediction in period t, Y_BPNN(t), Y_SVM(t) and Y_SVR(t) are the predictions of the BPNN, SVM and SVR models, and w_i (i = 1, 2, 3) are the weights assigned to those models.
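The weighted-combination step just described can be sketched in a few lines of plain Python. The per-model forecasts and the weight values below are hypothetical placeholders; in the paper the weights would come from the GA.

```python
def hybrid_predict(weights, predictions):
    """Weighted combination of component-model forecasts:
    y_hat(t) = sum_i w_i * f_i(t), with the weights summing to 1."""
    assert abs(sum(weights) - 1.0) < 1e-9, "weights must sum to 1"
    n = len(predictions[0])
    return [sum(w * preds[t] for w, preds in zip(weights, predictions))
            for t in range(n)]

# Hypothetical per-model forecasts for three periods (illustrative values).
f_bpnn = [100.0, 102.0, 101.0]
f_svm  = [ 99.0, 101.5, 100.5]
f_svr  = [100.5, 102.5, 101.5]
w = [0.2, 0.3, 0.5]  # hypothetical weights, e.g. as found by a GA
print(hybrid_predict(w, [f_bpnn, f_svm, f_svr]))
```

Setting all weights to 1/3 recovers the simple-average combiner that the hybrid model is compared against.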
The most important step in developing a hybrid prediction model is to determine the best weight for each individual model. Setting w_1 = w_2 = w_3 = 1/3 is the simplest combination of the three prediction models, but in many cases equal weights cannot achieve the best prediction result. This paper therefore utilizes a GA to determine the optimal weight of each prediction model; solving such optimization problems is one of the main uses of genetic algorithms. Figure 2 illustrates the architecture of the hybrid prediction model.

To find the optimal C and γ parameters, pairs of C and γ are tried and the pair with the best cross-validation mean square error (MSE) is chosen. After identifying a better region on the grid, a more specific grid search can be conducted on that region [25]. Finally, the C and γ with the lowest cross-validation prediction error are chosen for building the prediction models. Figure 3 illustrates the grid search process used when building the SVM and SVR prediction models.

Figure 2. Architecture of the hybrid model

B. Data pre-processing

This paper examines the FTSE 100 index. The data, daily closing prices, were collected from the London Stock Exchange. The data set covers the period 03/01/1984 to 30/10/2014 (8038 points) and is divided into two sets: the first 7788 points are used for training and the last 250 for testing. This study considers one-step-ahead prediction in order to prevent problems associated with cumulative errors from previous periods in out-of-sample prediction. Selecting and pre-processing the data is a crucial step in any modelling effort, particularly for the generalization of the new predictive model. The data are divided into two sets: for training, Tr = [x(m) ...
x(m+5)][x(m+6)], where m is a random permutation with 1 < m < N_Tr and N_Tr is the training data size. For testing, Ts = [x(n) ... x(n+5)][x(n+6)], where n = 1 : N_Ts and N_Ts is the testing data size.

C. Determining the parameters of the SVM and SVR models

This study uses SVM and SVR to predict the FTSE 100 daily closing price. Setting the parameters of each model properly improves its prediction output. The RBF kernel function is used in both models, so two parameters, C and γ, have to be determined for each. The most common approach to finding the best C and γ values is grid search [20]. The grid search approach uses a validation process to decide on parameters that produce good generalization, with the selection criterion described above.

Figure 3. Grid search algorithm for parameter selection

D. Quantization factor

This paper adopts a new approach to enhance the prediction output of the SVM and SVR models: a quantization factor is introduced to SVM and SVR for the first time. As explained in the methodology section, the input of the SVM and SVR models is (x_i, y_i). The optimal factor was determined by trial and error over a range of candidate factors and selected based on the lowest mean square error. The steps below show how the input changes when the quantization factor is introduced:

Xprime_i = x_i / factor
Yprime_i = y_i / factor

The model inputs become (Xprime_i, Yprime_i). After applying the SVM and SVR models, the chosen factor multiplies the output of each model to give the final prediction:

Xprime_pred = Xprime × factor
Yprime_pred = Yprime × factor

E. Algorithm implementation

SVR and SVM packages are available from different sources, such as Joachims' implementation [26], LibSVM by Chih-Jen Lin [27] and Steve Gunn's Matlab toolbox. This study utilizes LibSVM.
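The grid search and quantization-factor steps above can be sketched together as follows. Note the assumptions: `cv_mse` is a toy quadratic surrogate standing in for training an RBF-kernel SVM/SVR and returning its cross-validation MSE (in the paper this would be done with LibSVM), and the candidate grids and data values are purely illustrative.

```python
import itertools

# Quantization factor: scale the data down before training,
# then scale the model's predictions back up by the same factor.
def quantize(values, factor):
    return [v / factor for v in values]

def dequantize(values, factor):
    return [v * factor for v in values]

# Toy stand-in for the cross-validation MSE of an RBF-kernel model
# trained with parameters (C, gamma); a real run would train and
# validate LibSVM models here instead.
def cv_mse(C, gamma):
    return (C - 100) ** 2 * 1e-6 + (gamma - 0.01) ** 2

def grid_search(C_grid, gamma_grid):
    # Try every (C, gamma) pair and keep the lowest cross-validation MSE.
    best = None
    for C, gamma in itertools.product(C_grid, gamma_grid):
        mse = cv_mse(C, gamma)
        if best is None or mse < best[0]:
            best = (mse, C, gamma)
    return best

best_mse, best_C, best_gamma = grid_search([1, 10, 100, 1000],
                                           [0.001, 0.01, 0.1])
print(best_C, best_gamma)

factor = 17  # the factor the paper reports choosing by trial and error
scaled = quantize([6500.0, 6550.0], factor)     # illustrative index levels
restored = dequantize(scaled, factor)            # recovers the originals
```

After the coarse pass, a finer grid around the best pair would be searched, as described in the text.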
F. Prediction result evaluation

In this paper, the mean square error (MSE) and root mean square error (RMSE) are used to evaluate the prediction performance of the BPNN, SVM, SVR and hybrid models:

MSE = (1/N) Σ_{t=1}^{N} (x(t) − x̂(t))²   (26)

RMSE = [(1/N) Σ_{t=1}^{N} (x(t) − x̂(t))²]^{1/2}   (27)

IV. EXPERIMENTS AND PREDICTION RESULTS

A. Data sets

This paper proposes a hybrid model to predict the index closing price. To test the proposed model, the FTSE 100 index closing price for the London stock market is used. The data set covers the period 03/01/1984 to 30/10/2014; Figure 4 shows the time series plot. There are 8038 data points in total: the first 7788 are used as the training set and the remaining 250 as the testing set.

Figure 4. FTSE 100 daily closing price

B. Prediction results

This paper proposes a hybrid method combining three different models, SVM, SVR and BPNN, to predict the FTSE 100 stock index closing price. The first step in building the proposed method is to run the single classical models on the data set individually. The BPNN prediction model is built with the architecture described in the methodology section: the topology was set by trial and error using a feed-forward back-propagation network with one hidden layer of 5 to 20 neurons, and a learning rate of 0.7 gave the minimum testing error. Figure 5 illustrates the prediction results of the BPNN. To build the SVR and SVM models, the methods described in the methodology and framework sections are applied to determine the parameters of both models and the quantization factor.

Figure 5. The prediction result of FTSE 100 based on BPNN

Figure 6. The prediction result of FTSE 100 based on SVR
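The two evaluation measures in Eqs. (26) and (27) can be sketched directly; the actual/predicted series below are invented purely for illustration.

```python
import math

def mse(actual, predicted):
    # Eq. (26): mean of the squared prediction errors.
    n = len(actual)
    return sum((a - p) ** 2 for a, p in zip(actual, predicted)) / n

def rmse(actual, predicted):
    # Eq. (27): square root of the MSE.
    return math.sqrt(mse(actual, predicted))

actual    = [100.0, 102.0, 101.0]   # illustrative index values
predicted = [ 99.0, 103.0, 101.0]
print(mse(actual, predicted), rmse(actual, predicted))
```

Both measures penalize large deviations quadratically; the RMSE is simply the MSE expressed in the units of the original series.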
In both models the parameters are C = 100, with γ chosen by the grid search, and the quantization factor is 17. Figure 6 shows the prediction results of the SVR model and figure 7 shows the prediction results of the SVM model.

Figure 7. The prediction result of FTSE 100 based on SVM

Figure 8 shows the prediction results of the proposed hybrid model, while figure 9 shows the prediction results of the basic average combiner. The prediction performance is evaluated by comparing all five models: the proposed hybrid model, the SVM model, the SVR model, the BPNN model and the simple average combination model. Table I presents the two performance measures, MSE and RMSE, for the five models. The proposed model clearly outperforms all the other models, with much lower error.

V. CONCLUSIONS

This paper examines the predictability of a stock index time series whose dynamics in real situations are
complex and unknown. As a result, a single classical model will not produce accurate prediction results. In this paper a hybrid combination model was introduced, combining BPNN, SVM and SVR. The FTSE 100 daily closing price was used to evaluate the performance of the proposed model. The proposed model was compared with the other models and, as Table I shows, the hybrid model outperforms all of them. Since the proposed hybrid model outperforms the three single models, future work should explore different hybrid combination paradigms: different single models can be added and combined. Moreover, to test the model's robustness, this study can be extended by testing different data sets.

Figure 8. The prediction result of FTSE 100 based on the proposed hybrid model

Figure 9. The prediction result of FTSE 100 based on the simple average combination

TABLE I. PREDICTION ERROR OF DIFFERENT MODELS. The table reports the MSE and RMSE of the five prediction models (BPNN, SVM, SVR, the simple average combination model and the proposed hybrid model); the proposed hybrid model achieves the lowest errors (RMSE 45.46).

REFERENCES

[1] M. C. Thomsett, Mastering Fundamental Analysis. Dearborn Financial Pub.
[2] T. Bollerslev, "Generalized autoregressive conditional heteroskedasticity," J. Econometrics, vol. 31.
[3] A. Zellner and F. Palm, "Time series analysis and simultaneous equation econometric models," J. Econometrics, vol. 2.
[4] A. Adebiyi, C. Ayo, M. Adebiyi and S. Otokiti, "An Improved Stock Price Prediction using Hybrid Market Indicators," African Journal of Computing & ICT, vol. 5, 2012.
[5] M. Khashei, M. Bijari and G. A. Raissi Ardali, "Improvement of autoregressive integrated moving average models using fuzzy logic and artificial neural networks (ANNs)," Neurocomputing, vol. 72.
[6] M. Khashei and M. Bijari, "An artificial neural network (p, d, q) model for time series forecasting," Expert Syst. Appl., vol. 37.
[7] E. Hajizadeh, H. D.
Ardakani and J. Shahrabi, "Application of data mining techniques in stock markets: A survey," Journal of Economics and International Finance, vol. 2.
[8] C. S. Vui, G. K. Soon, C. K. On, R. Alfred and P. Anthony, "A review of stock market prediction with artificial neural network (ANN)," in Control System, Computing and Engineering (ICCSCE), 2013 IEEE International Conference on, 2013.
[9] S. Soni, "Applications of ANNs in stock market prediction: a survey," International Journal of Computer Science & Engineering Technology, vol. 2.
[10] H. Yu and H. Liu, "Improved stock market prediction by combining support vector machine and empirical mode decomposition," in Computational Intelligence and Design (ISCID), 2012 Fifth International Symposium on, 2012.
[11] V. Vapnik, The Nature of Statistical Learning Theory. Springer Science & Business Media.
[12] L. A. Diaz-Robles, J. C. Ortega, J. S. Fu, G. D. Reed, J. C. Chow, J. G. Watson and J. A. Moncada-Herrera, "A hybrid ARIMA and artificial neural networks model to forecast particulate matter in urban areas: the case of Temuco, Chile," Atmos. Environ., vol. 42.
[13] M. Kubat, review of Neural Networks: A Comprehensive Foundation by Simon Haykin, Macmillan, 1994.
[14] Y. Wang, "Nonlinear neural network forecasting model for stock index option price: Hybrid GJR-GARCH approach," Expert Syst. Appl., vol. 36.
[15] D. Ö. Faruk, "A hybrid neural network and ARIMA model for water quality time series prediction," Eng. Appl. Artif. Intell., vol. 23.
[16] C. Cortes and V. Vapnik, "Support-vector networks," Mach. Learning, vol. 20.
[17] M. Pontil and A. Verri, "Properties of support vector machines," Neural Comput., vol. 10.
[18] E. Osuna, R. Freund and F. Girosi, "Support vector machines: Training and applications."
[19] Y. Xia, Y. Liu and Z.
Chen, "Support vector regression for prediction of stock trend," in Information Management, Innovation Management and Industrial Engineering (ICIII), International Conference on, 2013.
[20] Y. Zhang and L. Wu, "Stock market prediction of S&P 500 via combination of improved BCO approach and BP neural network," Expert Syst. Appl., vol. 36.
[21] J. S. Armstrong, "Combining forecasts: The end of the beginning or the beginning of the end?" Int. J. Forecast., vol. 5.
[22] A. Timmermann, "Forecast combinations," Handbook of Economic Forecasting, vol. 1.
[23] S. Gupta and P. C. Wilton, "Combination of forecasts: An extension," Management Science, vol. 33, 1987.
[24] C. Christodoulos, C. Michalakelis and D. Varoutas, "Forecasting with limited data: Combining ARIMA and diffusion models," Technological Forecasting and Social Change, vol. 77.
[25] C. Hsu, C. Chang and C. Lin, A Practical Guide to Support Vector Classification.
[26] T. Joachims, Making Large Scale SVM Learning Practical.
[27] S. Soni, "Applications of ANNs in stock market prediction: a survey," International Journal of Computer Science & Engineering Technology, vol. 2.
IEEE TRANSACTIONS ON INFORMATION THEORY Large Alphabet Source Coding using Independent Coponent Analysis Aichai Painsky, Meber, IEEE, Saharon Rosset and Meir Feder, Fellow, IEEE arxiv:67.7v [cs.it] Jul
More informationGrafting: Fast, Incremental Feature Selection by Gradient Descent in Function Space
Journal of Machine Learning Research 3 (2003) 1333-1356 Subitted 5/02; Published 3/03 Grafting: Fast, Increental Feature Selection by Gradient Descent in Function Space Sion Perkins Space and Reote Sensing
More informationAn improved self-adaptive harmony search algorithm for joint replenishment problems
An iproved self-adaptive harony search algorith for joint replenishent probles Lin Wang School of Manageent, Huazhong University of Science & Technology zhoulearner@gail.co Xiaojian Zhou School of Manageent,
More informationCourse Notes for EE227C (Spring 2018): Convex Optimization and Approximation
Course Notes for EE7C (Spring 018: Convex Optiization and Approxiation Instructor: Moritz Hardt Eail: hardt+ee7c@berkeley.edu Graduate Instructor: Max Sichowitz Eail: sichow+ee7c@berkeley.edu October 15,
More informationW-BASED VS LATENT VARIABLES SPATIAL AUTOREGRESSIVE MODELS: EVIDENCE FROM MONTE CARLO SIMULATIONS
W-BASED VS LATENT VARIABLES SPATIAL AUTOREGRESSIVE MODELS: EVIDENCE FROM MONTE CARLO SIMULATIONS. Introduction When it coes to applying econoetric odels to analyze georeferenced data, researchers are well
More informationCh 12: Variations on Backpropagation
Ch 2: Variations on Backpropagation The basic backpropagation algorith is too slow for ost practical applications. It ay take days or weeks of coputer tie. We deonstrate why the backpropagation algorith
More informationMulti-view Discriminative Manifold Embedding for Pattern Classification
Multi-view Discriinative Manifold Ebedding for Pattern Classification X. Wang Departen of Inforation Zhenghzou 450053, China Y. Guo Departent of Digestive Zhengzhou 450053, China Z. Wang Henan University
More informationZISC Neural Network Base Indicator for Classification Complexity Estimation
ZISC Neural Network Base Indicator for Classification Coplexity Estiation Ivan Budnyk, Abdennasser Сhebira and Kurosh Madani Iages, Signals and Intelligent Systes Laboratory (LISSI / EA 3956) PARIS XII
More informationExperimental Design For Model Discrimination And Precise Parameter Estimation In WDS Analysis
City University of New York (CUNY) CUNY Acadeic Works International Conference on Hydroinforatics 8-1-2014 Experiental Design For Model Discriination And Precise Paraeter Estiation In WDS Analysis Giovanna
More informationProc. of the IEEE/OES Seventh Working Conference on Current Measurement Technology UNCERTAINTIES IN SEASONDE CURRENT VELOCITIES
Proc. of the IEEE/OES Seventh Working Conference on Current Measureent Technology UNCERTAINTIES IN SEASONDE CURRENT VELOCITIES Belinda Lipa Codar Ocean Sensors 15 La Sandra Way, Portola Valley, CA 98 blipa@pogo.co
More informationInspection; structural health monitoring; reliability; Bayesian analysis; updating; decision analysis; value of information
Cite as: Straub D. (2014). Value of inforation analysis with structural reliability ethods. Structural Safety, 49: 75-86. Value of Inforation Analysis with Structural Reliability Methods Daniel Straub
More informationThe Algorithms Optimization of Artificial Neural Network Based on Particle Swarm
Send Orders for Reprints to reprints@benthascience.ae The Open Cybernetics & Systeics Journal, 04, 8, 59-54 59 Open Access The Algoriths Optiization of Artificial Neural Network Based on Particle Swar
More informationFeature Extraction Techniques
Feature Extraction Techniques Unsupervised Learning II Feature Extraction Unsupervised ethods can also be used to find features which can be useful for categorization. There are unsupervised ethods that
More informationPattern Recognition and Machine Learning. Learning and Evaluation for Pattern Recognition
Pattern Recognition and Machine Learning Jaes L. Crowley ENSIMAG 3 - MMIS Fall Seester 2017 Lesson 1 4 October 2017 Outline Learning and Evaluation for Pattern Recognition Notation...2 1. The Pattern Recognition
More informationBayes Decision Rule and Naïve Bayes Classifier
Bayes Decision Rule and Naïve Bayes Classifier Le Song Machine Learning I CSE 6740, Fall 2013 Gaussian Mixture odel A density odel p(x) ay be ulti-odal: odel it as a ixture of uni-odal distributions (e.g.
More informationMSEC MODELING OF DEGRADATION PROCESSES TO OBTAIN AN OPTIMAL SOLUTION FOR MAINTENANCE AND PERFORMANCE
Proceeding of the ASME 9 International Manufacturing Science and Engineering Conference MSEC9 October 4-7, 9, West Lafayette, Indiana, USA MSEC9-8466 MODELING OF DEGRADATION PROCESSES TO OBTAIN AN OPTIMAL
More informationSymbolic Analysis as Universal Tool for Deriving Properties of Non-linear Algorithms Case study of EM Algorithm
Acta Polytechnica Hungarica Vol., No., 04 Sybolic Analysis as Universal Tool for Deriving Properties of Non-linear Algoriths Case study of EM Algorith Vladiir Mladenović, Miroslav Lutovac, Dana Porrat
More informationESTIMATING AND FORMING CONFIDENCE INTERVALS FOR EXTREMA OF RANDOM POLYNOMIALS. A Thesis. Presented to. The Faculty of the Department of Mathematics
ESTIMATING AND FORMING CONFIDENCE INTERVALS FOR EXTREMA OF RANDOM POLYNOMIALS A Thesis Presented to The Faculty of the Departent of Matheatics San Jose State University In Partial Fulfillent of the Requireents
More informationE0 370 Statistical Learning Theory Lecture 6 (Aug 30, 2011) Margin Analysis
E0 370 tatistical Learning Theory Lecture 6 (Aug 30, 20) Margin Analysis Lecturer: hivani Agarwal cribe: Narasihan R Introduction In the last few lectures we have seen how to obtain high confidence bounds
More informationDomain-Adversarial Neural Networks
Doain-Adversarial Neural Networks Hana Ajakan, Pascal Gerain 2, Hugo Larochelle 3, François Laviolette 2, Mario Marchand 2,2 Départeent d inforatique et de génie logiciel, Université Laval, Québec, Canada
More informationIAENG International Journal of Computer Science, 42:2, IJCS_42_2_06. Approximation Capabilities of Interpretable Fuzzy Inference Systems
IAENG International Journal of Coputer Science, 4:, IJCS_4 6 Approxiation Capabilities of Interpretable Fuzzy Inference Systes Hirofui Miyajia, Noritaka Shigei, and Hiroi Miyajia 3 Abstract Many studies
More information2 Q 10. Likewise, in case of multiple particles, the corresponding density in 2 must be averaged over all
Lecture 6 Introduction to kinetic theory of plasa waves Introduction to kinetic theory So far we have been odeling plasa dynaics using fluid equations. The assuption has been that the pressure can be either
More informationCHARACTER RECOGNITION USING A SELF-ADAPTIVE TRAINING
CHARACTER RECOGNITION USING A SELF-ADAPTIVE TRAINING Dr. Eng. Shasuddin Ahed $ College of Business and Econoics (AACSB Accredited) United Arab Eirates University, P O Box 7555, Al Ain, UAE. and $ Edith
More informationExtension of CSRSM for the Parametric Study of the Face Stability of Pressurized Tunnels
Extension of CSRSM for the Paraetric Study of the Face Stability of Pressurized Tunnels Guilhe Mollon 1, Daniel Dias 2, and Abdul-Haid Soubra 3, M.ASCE 1 LGCIE, INSA Lyon, Université de Lyon, Doaine scientifique
More informationA Self-Organizing Model for Logical Regression Jerry Farlow 1 University of Maine. (1900 words)
1 A Self-Organizing Model for Logical Regression Jerry Farlow 1 University of Maine (1900 words) Contact: Jerry Farlow Dept of Matheatics Univeristy of Maine Orono, ME 04469 Tel (07) 866-3540 Eail: farlow@ath.uaine.edu
More informationEstimation of ADC Nonlinearities from the Measurement in Input Voltage Intervals
Estiation of ADC Nonlinearities fro the Measureent in Input Voltage Intervals M. Godla, L. Michaeli, 3 J. Šaliga, 4 R. Palenčár,,3 Deptartent of Electronics and Multiedia Counications, FEI TU of Košice,
More informationBoosting with log-loss
Boosting with log-loss Marco Cusuano-Towner Septeber 2, 202 The proble Suppose we have data exaples {x i, y i ) i =... } for a two-class proble with y i {, }. Let F x) be the predictor function with the
More informationA Simple Regression Problem
A Siple Regression Proble R. M. Castro March 23, 2 In this brief note a siple regression proble will be introduced, illustrating clearly the bias-variance tradeoff. Let Y i f(x i ) + W i, i,..., n, where
More informationBootstrapping Dependent Data
Bootstrapping Dependent Data One of the key issues confronting bootstrap resapling approxiations is how to deal with dependent data. Consider a sequence fx t g n t= of dependent rando variables. Clearly
More informationA Smoothed Boosting Algorithm Using Probabilistic Output Codes
A Soothed Boosting Algorith Using Probabilistic Output Codes Rong Jin rongjin@cse.su.edu Dept. of Coputer Science and Engineering, Michigan State University, MI 48824, USA Jian Zhang jian.zhang@cs.cu.edu
More informationREDUCTION OF FINITE ELEMENT MODELS BY PARAMETER IDENTIFICATION
ISSN 139 14X INFORMATION TECHNOLOGY AND CONTROL, 008, Vol.37, No.3 REDUCTION OF FINITE ELEMENT MODELS BY PARAMETER IDENTIFICATION Riantas Barauskas, Vidantas Riavičius Departent of Syste Analysis, Kaunas
More informationDepartment of Electronic and Optical Engineering, Ordnance Engineering College, Shijiazhuang, , China
6th International Conference on Machinery, Materials, Environent, Biotechnology and Coputer (MMEBC 06) Solving Multi-Sensor Multi-Target Assignent Proble Based on Copositive Cobat Efficiency and QPSO Algorith
More informationINTEGRATIVE COOPERATIVE APPROACH FOR SOLVING PERMUTATION FLOWSHOP SCHEDULING PROBLEM WITH SEQUENCE DEPENDENT FAMILY SETUP TIMES
8 th International Conference of Modeling and Siulation - MOSIM 10 - May 10-12, 2010 - Haaet - Tunisia Evaluation and optiization of innovative production systes of goods and services INTEGRATIVE COOPERATIVE
More informationNonmonotonic Networks. a. IRST, I Povo (Trento) Italy, b. Univ. of Trento, Physics Dept., I Povo (Trento) Italy
Storage Capacity and Dynaics of Nononotonic Networks Bruno Crespi a and Ignazio Lazzizzera b a. IRST, I-38050 Povo (Trento) Italy, b. Univ. of Trento, Physics Dept., I-38050 Povo (Trento) Italy INFN Gruppo
More informationPAC-Bayes Analysis Of Maximum Entropy Learning
PAC-Bayes Analysis Of Maxiu Entropy Learning John Shawe-Taylor and David R. Hardoon Centre for Coputational Statistics and Machine Learning Departent of Coputer Science University College London, UK, WC1E
More informationKinematics and dynamics, a computational approach
Kineatics and dynaics, a coputational approach We begin the discussion of nuerical approaches to echanics with the definition for the velocity r r ( t t) r ( t) v( t) li li or r( t t) r( t) v( t) t for
More informationA Simplified Analytical Approach for Efficiency Evaluation of the Weaving Machines with Automatic Filling Repair
Proceedings of the 6th SEAS International Conference on Siulation, Modelling and Optiization, Lisbon, Portugal, Septeber -4, 006 0 A Siplified Analytical Approach for Efficiency Evaluation of the eaving
More informationIN modern society that various systems have become more
Developent of Reliability Function in -Coponent Standby Redundant Syste with Priority Based on Maxiu Entropy Principle Ryosuke Hirata, Ikuo Arizono, Ryosuke Toohiro, Satoshi Oigawa, and Yasuhiko Takeoto
More informationBlock designs and statistics
Bloc designs and statistics Notes for Math 447 May 3, 2011 The ain paraeters of a bloc design are nuber of varieties v, bloc size, nuber of blocs b. A design is built on a set of v eleents. Each eleent
More informationINTELLECTUAL DATA ANALYSIS IN AIRCRAFT DESIGN
INTELLECTUAL DATA ANALYSIS IN AIRCRAFT DESIGN V.A. Koarov 1, S.A. Piyavskiy 2 1 Saara National Research University, Saara, Russia 2 Saara State Architectural University, Saara, Russia Abstract. This article
More informationAN IMPROVED FUZZY TIME SERIES THEORY WITH APPLICATIONS IN THE SHANGHAI CONTAINERIZED FREIGHT INDEX
Journal of Marine Science and Technology, Vol 5, No, pp 393-398 (01) 393 DOI: 106119/JMST-01-0313-1 AN IMPROVED FUZZY TIME SERIES THEORY WITH APPLICATIONS IN THE SHANGHAI CONTAINERIZED FREIGHT INDEX Ming-Tao
More informationOnline Publication Date: 19 April 2012 Publisher: Asian Economic and Social Society
Online Publication Date: April Publisher: Asian Econoic and Social Society Using Entropy Working Correlation Matrix in Generalized Estiating Equation for Stock Price Change Model Serpilılıç (Faculty of
More informationPredictive Vaccinology: Optimisation of Predictions Using Support Vector Machine Classifiers
Predictive Vaccinology: Optiisation of Predictions Using Support Vector Machine Classifiers Ivana Bozic,2, Guang Lan Zhang 2,3, and Vladiir Brusic 2,4 Faculty of Matheatics, University of Belgrade, Belgrade,
More informationSoft Computing Techniques Help Assign Weights to Different Factors in Vulnerability Analysis
Soft Coputing Techniques Help Assign Weights to Different Factors in Vulnerability Analysis Beverly Rivera 1,2, Irbis Gallegos 1, and Vladik Kreinovich 2 1 Regional Cyber and Energy Security Center RCES
More informationOptimum Value of Poverty Measure Using Inverse Optimization Programming Problem
International Journal of Conteporary Matheatical Sciences Vol. 14, 2019, no. 1, 31-42 HIKARI Ltd, www.-hikari.co https://doi.org/10.12988/ijcs.2019.914 Optiu Value of Poverty Measure Using Inverse Optiization
More informationLecture 9: Multi Kernel SVM
Lecture 9: Multi Kernel SVM Stéphane Canu stephane.canu@litislab.eu Sao Paulo 204 April 6, 204 Roadap Tuning the kernel: MKL The ultiple kernel proble Sparse kernel achines for regression: SVR SipleMKL:
More informationNyström Method vs Random Fourier Features: A Theoretical and Empirical Comparison
yströ Method vs : A Theoretical and Epirical Coparison Tianbao Yang, Yu-Feng Li, Mehrdad Mahdavi, Rong Jin, Zhi-Hua Zhou Machine Learning Lab, GE Global Research, San Raon, CA 94583 Michigan State University,
More informationAn Improved Particle Filter with Applications in Ballistic Target Tracking
Sensors & ransducers Vol. 72 Issue 6 June 204 pp. 96-20 Sensors & ransducers 204 by IFSA Publishing S. L. http://www.sensorsportal.co An Iproved Particle Filter with Applications in Ballistic arget racing
More informationDEPARTMENT OF ECONOMETRICS AND BUSINESS STATISTICS
ISSN 1440-771X AUSTRALIA DEPARTMENT OF ECONOMETRICS AND BUSINESS STATISTICS An Iproved Method for Bandwidth Selection When Estiating ROC Curves Peter G Hall and Rob J Hyndan Working Paper 11/00 An iproved
More informationInteractive Markov Models of Evolutionary Algorithms
Cleveland State University EngagedScholarship@CSU Electrical Engineering & Coputer Science Faculty Publications Electrical Engineering & Coputer Science Departent 2015 Interactive Markov Models of Evolutionary
More informationGeometrical intuition behind the dual problem
Based on: Geoetrical intuition behind the dual proble KP Bennett, EJ Bredensteiner, Duality and Geoetry in SVM Classifiers, Proceedings of the International Conference on Machine Learning, 2000 1 Geoetrical
More informationUsing EM To Estimate A Probablity Density With A Mixture Of Gaussians
Using EM To Estiate A Probablity Density With A Mixture Of Gaussians Aaron A. D Souza adsouza@usc.edu Introduction The proble we are trying to address in this note is siple. Given a set of data points
More informationCHAPTER 2 LITERATURE SURVEY, POWER SYSTEM PROBLEMS AND NEURAL NETWORK TOPOLOGIES
7 CHAPTER LITERATURE SURVEY, POWER SYSTEM PROBLEMS AND NEURAL NETWORK TOPOLOGIES. GENERAL This chapter presents detailed literature survey, briefly discusses the proposed probles, the chosen neural architectures
More informationCS Lecture 13. More Maximum Likelihood
CS 6347 Lecture 13 More Maxiu Likelihood Recap Last tie: Introduction to axiu likelihood estiation MLE for Bayesian networks Optial CPTs correspond to epirical counts Today: MLE for CRFs 2 Maxiu Likelihood
More informationSharp Time Data Tradeoffs for Linear Inverse Problems
Sharp Tie Data Tradeoffs for Linear Inverse Probles Saet Oyak Benjain Recht Mahdi Soltanolkotabi January 016 Abstract In this paper we characterize sharp tie-data tradeoffs for optiization probles used
More informationThis model assumes that the probability of a gap has size i is proportional to 1/i. i.e., i log m e. j=1. E[gap size] = i P r(i) = N f t.
CS 493: Algoriths for Massive Data Sets Feb 2, 2002 Local Models, Bloo Filter Scribe: Qin Lv Local Models In global odels, every inverted file entry is copressed with the sae odel. This work wells when
More informationNBN Algorithm Introduction Computational Fundamentals. Bogdan M. Wilamoswki Auburn University. Hao Yu Auburn University
NBN Algorith Bogdan M. Wilaoswki Auburn University Hao Yu Auburn University Nicholas Cotton Auburn University. Introduction. -. Coputational Fundaentals - Definition of Basic Concepts in Neural Network
More informationHIGH RESOLUTION NEAR-FIELD MULTIPLE TARGET DETECTION AND LOCALIZATION USING SUPPORT VECTOR MACHINES
ICONIC 2007 St. Louis, O, USA June 27-29, 2007 HIGH RESOLUTION NEAR-FIELD ULTIPLE TARGET DETECTION AND LOCALIZATION USING SUPPORT VECTOR ACHINES A. Randazzo,. A. Abou-Khousa 2,.Pastorino, and R. Zoughi
More informationEMPIRICAL COMPLEXITY ANALYSIS OF A MILP-APPROACH FOR OPTIMIZATION OF HYBRID SYSTEMS
EMPIRICAL COMPLEXITY ANALYSIS OF A MILP-APPROACH FOR OPTIMIZATION OF HYBRID SYSTEMS Jochen Till, Sebastian Engell, Sebastian Panek, and Olaf Stursberg Process Control Lab (CT-AST), University of Dortund,
More informationLost-Sales Problems with Stochastic Lead Times: Convexity Results for Base-Stock Policies
OPERATIONS RESEARCH Vol. 52, No. 5, Septeber October 2004, pp. 795 803 issn 0030-364X eissn 1526-5463 04 5205 0795 infors doi 10.1287/opre.1040.0130 2004 INFORMS TECHNICAL NOTE Lost-Sales Probles with
More informationUsing a De-Convolution Window for Operating Modal Analysis
Using a De-Convolution Window for Operating Modal Analysis Brian Schwarz Vibrant Technology, Inc. Scotts Valley, CA Mark Richardson Vibrant Technology, Inc. Scotts Valley, CA Abstract Operating Modal Analysis
More informationTEST OF HOMOGENEITY OF PARALLEL SAMPLES FROM LOGNORMAL POPULATIONS WITH UNEQUAL VARIANCES
TEST OF HOMOGENEITY OF PARALLEL SAMPLES FROM LOGNORMAL POPULATIONS WITH UNEQUAL VARIANCES S. E. Ahed, R. J. Tokins and A. I. Volodin Departent of Matheatics and Statistics University of Regina Regina,
More informationWhen Short Runs Beat Long Runs
When Short Runs Beat Long Runs Sean Luke George Mason University http://www.cs.gu.edu/ sean/ Abstract What will yield the best results: doing one run n generations long or doing runs n/ generations long
More informationKeywords: Estimator, Bias, Mean-squared error, normality, generalized Pareto distribution
Testing approxiate norality of an estiator using the estiated MSE and bias with an application to the shape paraeter of the generalized Pareto distribution J. Martin van Zyl Abstract In this work the norality
More informationPh 20.3 Numerical Solution of Ordinary Differential Equations
Ph 20.3 Nuerical Solution of Ordinary Differential Equations Due: Week 5 -v20170314- This Assignent So far, your assignents have tried to failiarize you with the hardware and software in the Physics Coputing
More informationPattern Recognition and Machine Learning. Artificial Neural networks
Pattern Recognition and Machine Learning Jaes L. Crowley ENSIMAG 3 - MMIS Fall Seester 2016/2017 Lessons 9 11 Jan 2017 Outline Artificial Neural networks Notation...2 Convolutional Neural Networks...3
More informationACTIVE VIBRATION CONTROL FOR STRUCTURE HAVING NON- LINEAR BEHAVIOR UNDER EARTHQUAKE EXCITATION
International onference on Earthquae Engineering and Disaster itigation, Jaarta, April 14-15, 8 ATIVE VIBRATION ONTROL FOR TRUTURE HAVING NON- LINEAR BEHAVIOR UNDER EARTHQUAE EXITATION Herlien D. etio
More informationA Comparative Study of Parametric and Nonparametric Regressions
Iranian Econoic Review, Vol.16, No.30, Fall 011 A Coparative Study of Paraetric and Nonparaetric Regressions Shahra Fattahi Received: 010/08/8 Accepted: 011/04/4 Abstract his paper evaluates inflation
More informationEfficient Filter Banks And Interpolators
Efficient Filter Banks And Interpolators A. G. DEMPSTER AND N. P. MURPHY Departent of Electronic Systes University of Westinster 115 New Cavendish St, London W1M 8JS United Kingdo Abstract: - Graphical
More informationarxiv: v3 [cs.lg] 7 Jan 2016
Efficient and Parsionious Agnostic Active Learning Tzu-Kuo Huang Alekh Agarwal Daniel J. Hsu tkhuang@icrosoft.co alekha@icrosoft.co djhsu@cs.colubia.edu John Langford Robert E. Schapire jcl@icrosoft.co
More informationDetermining OWA Operator Weights by Mean Absolute Deviation Minimization
Deterining OWA Operator Weights by Mean Absolute Deviation Miniization Micha l Majdan 1,2 and W lodziierz Ogryczak 1 1 Institute of Control and Coputation Engineering, Warsaw University of Technology,
More informationThe proofs of Theorem 1-3 are along the lines of Wied and Galeano (2013).
A Appendix: Proofs The proofs of Theore 1-3 are along the lines of Wied and Galeano (2013) Proof of Theore 1 Let D[d 1, d 2 ] be the space of càdlàg functions on the interval [d 1, d 2 ] equipped with
More information