A New Method of DDB Logical Structure Synthesis Using Distributed Tabu Search


Eduard Babkin and Margarita Karunina

National Research University Higher School of Economics, Dept. of Information Systems and Technologies, Bol. Pecherskaya 25, 603155 Nizhny Novgorod, Russia
eababkin@hse.ru, karunina-margarita@yandex.ru

Abstract. In this paper we propose a parallel tabu search algorithm, based on the sequential tabu algorithm we constructed earlier, to solve the problem of distributed database optimal logical structure synthesis. We also report on the performance of the new parallel algorithm and on the quality of the solutions obtained with its help.

Keywords: Neural networks, tabu search, genetic algorithms, parallel programming, distributed databases

1 Introduction

The problems of decomposition of complex data structures play an extremely important role in many critical applications, varying from cloud computing to distributed databases (DDB) [7]. In the latter class of applications that problem is usually formulated as synthesis of the optimal logical structure (OLS). In accordance with [3] it consists of two stages. The first stage is decomposition of data elements (DE) into logical record (LR) types. The second stage is irredundant placement of LR types in the computing network. For each stage various domain-specific constraints are introduced (such as irredundant allocation, semantic contiguity of data elements, and available external storage), and optimality criteria are specified. In our work the criterion function is the minimum of the total time needed for consecutive processing of a set of DDB user queries [8]. From the mathematical point of view the specified problem is an NP-complete nonlinear integer programming optimization problem. So far, different task-specific approaches have been proposed, such as the branch-and-bound method with a set of heuristics (BBM) [8], probabilistic algorithms, etc. However, not many of them exploit the benefits of parallel processing and grid technologies [4], [5], [6], [9].

In previous works the authors developed an exact mathematical formalization of the OLS problem and offered a sequential tabu search (TS) algorithm which used different Tabu Machines (TMs) for each stage of the solution [1], [2]. The constructed algorithm produced solutions of good quality, but it was computationally efficient only for small mock-up problems. In the present article we propose a new distributed model of the TM (DTM) and a computationally efficient parallel algorithm for solving complex data structure decomposition problems.

The article has the following structure. In Section 2 we outline the critical elements of the TM. Section 3 describes the consecutive TM-algorithm for the OLS problem. In Section 4 a general description of the DTM algorithm is given, and Section 5 specifies it in detail. Section 6 briefly describes the evaluation of the obtained parallel algorithm. An overview of the results in Section 7 concludes the article.

2 Short Overview of Tabu Machine Model and Dynamics

In our work we use the generic model of the TM as specified by Minghe Sun and Hamid R. Nemati [10], with the following important constituents:

- S = {s_1, …, s_n} is the current state of the TM; it is collectively determined by the states of its nodes.
- S′ = {s′_1, …, s′_n} is the state of the TM with the minimum energy among all states obtained up to the current moment within the local period (i.e. within the short-term memory process, STMP).
- S* = {s*_1, …, s*_n} is the state of the TM with the minimum energy among all states obtained up to the current moment (within both the STMP and the long-term memory process, LTMP).
- T = {t_1, …, t_n} is a vector used to check the tabu condition.
- E(S), E(S′) and E(S*) are the TM energies corresponding to the states S, S′ and S*.
- k is the number of iterations (i.e. the number of neural network (NN) transitions from one state to another) from the outset of the TM functioning.
- h is the number of iterations since the last renewal of the value E(S′) within the STMP.
- c is the number of LTMPs carried out by the current moment.

The following variables stand as parameters of the TM-algorithm: l, the tabu size; β, the parameter determining the termination criterion of the STMP; and C, the maximum number of available LTMPs inside the TM-algorithm.

The state transition mechanism of the TM is governed by TS and is performed until the predefined stopping rule is satisfied. Let us name this sequence of state transitions a work period of the TM. It is advisable to run the TM for several work periods, and it is better to begin a new work period using information taken from the previous work periods, i.e. from the history of the TM's work, by applying an LTMP.

In such a case the TS algorithm finds the node which has not changed its state for the longest time among all neurons of the TM, and then this node is forced to switch its state.

3 A Consecutive TM-Algorithm for the OLS Problem

As [3] states, the general problem of DDB OLS synthesis consists of two stages:

1. Composition of logical record (LR) types from data elements (DE), using the constraints on: the number of elements in an LR type; single inclusion of each element in an LR type; the required level of information safety of the system. In addition, LR type synthesis should take into account the semantic contiguity of DEs.

2. Irredundant allocation of LR types among the nodes of the computing network, using the constraints on: irredundant allocation of LR types; the length of the formed LR type on each host; the total number of synthesized LR types placed on each host; the volume of accessible external memory of the hosts for storage of local databases; the total processing time of operational queries on the hosts.

The synthesis objective is to minimize the total time needed for consecutive processing of a set of DDB user queries. This problem has an exact but very large mathematical formalization; due to the limited size of this paper we provide it in Appendix I and Appendix II, and refer the reader to [1], [2], [3], [8] for further details.

In our previous work [2] we offered a new method of formalizing the described problem in terms of the TM and constructed the TM energy functions as follows. The TM for the first stage consists of one layer of neurons connected by complete bidirectional links. The number of neurons in the layer equals the number of DEs multiplied by the number of LR types, and each neuron carries two indexes corresponding to the numbers of a DE and an LR. For example, OUT_xi = 1 means that DE x will be included in the i-th LR. All outputs OUT_xi of the network are binary, i.e. they take values from the set {0, 1}. The following TM energy function for LR composition was proposed:

E = -\frac{1}{2} \sum_{x=1}^{n} \sum_{i=1}^{g} \sum_{y=1}^{n} \sum_{j=1}^{g} w_{xi,yj}\, OUT_{xi}\, OUT_{yj} + \sum_{x=1}^{n} \sum_{i=1}^{g} T_{xi}\, OUT_{xi},   (1)

where n is the number of DEs, g is the number of LR types,

w_{xi,yj} = -\big[ A\,\delta_{xy}(1-\delta_{ij}) + B\,\delta_{ij}(1-\delta_{xy})(2-a_{xy}) - D\,\delta_{ij}(\mathrm{incom\_gr}_{xy} + \mathrm{incom\_gr}_{yx}) \big]

are the weights of the neurons, and

T_{xi} = \frac{B}{2} \sum_{y=1,\, y \neq x}^{n} (2 - a_{xy}) + \frac{C}{2 F_i}

are the neuron thresholds.
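To see how the quadratic terms of the energy function encode constraints, consider the single-inclusion term: summing A δ_xy (1 − δ_ij) OUT_xi OUT_yj over all index pairs counts, for each data element, every ordered pair of distinct LR types that both contain it. The following Python check of this reading is illustrative only (the coefficient name A follows the text above):

```python
import numpy as np

def single_inclusion_penalty(OUT, A=1.0):
    """A-term of the LR-composition energy: OUT is an n x g binary
    matrix with OUT[x, i] = 1 iff DE x is assigned to LR type i.
    A DE assigned to m LR types contributes (A/2) * m * (m - 1),
    so the term is zero exactly for single-inclusion assignments."""
    m = OUT.sum(axis=1)                      # LR types per data element
    return 0.5 * A * float((m * (m - 1)).sum())
```

For instance, `single_inclusion_penalty(np.array([[1, 0], [0, 1]]))` is 0.0, while assigning the first DE to both LR types yields a positive penalty, which the minimization then drives out.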

For the second stage of irredundant LR allocation we offered a TM with the same structure as the TM for LR composition, but with the number of neurons in the layer equal to T·R, where T is the number of LRs synthesized during LR composition and R is the number of hosts available for LR allocation. As a result of translating the constraints into the terms of the TM, the following TM energy function for LR allocation was obtained:

E = -\frac{1}{2} \sum_{t_1=1}^{T} \sum_{r_1=1}^{R} \sum_{t_2=1}^{T} \sum_{r_2=1}^{R} w_{t_1 r_1, t_2 r_2}\, OUT_{t_1 r_1}\, OUT_{t_2 r_2} + \sum_{t=1}^{T} \sum_{r=1}^{R} T_{tr}\, OUT_{tr},   (2)

where the weights w_{t_1 r_1, t_2 r_2} = -A_2\,\delta_{t_1 t_2}(1-\delta_{r_1 r_2}) penalize allocation of the same LR type to more than one host, and the thresholds T_{tr} accumulate the remaining constraint terms with the coefficients B_2, C_2 and D_2, built from the query characteristics ρ_i, π_i, x_it, the host parameters θ_t, s_rh, η_r, the volumes of accessible external memory EMD_r, and the normalized sum SN_t. Here SN_t is computed from the matrix w, which shows which DEs are used during the processing of the different queries.

In [2] we also compared the developed TM-algorithm with other methods, such as [10], to estimate the opportunities and advantages of TS over our earlier approaches based on Hopfield networks or on their combination with genetic algorithms (the NN-GA-algorithm) [3]. For complex mock-up problems the quality of the TM solutions was higher than the quality of the solutions received with the help of the NN-GA-algorithm by 8.7% on average and higher than the quality of the solutions received with the help of the BBM by 23.6% on average (refer to Fig. 1), and the CPU time for LR composition was on average 36% less than that spent by the Hopfield network approach. So our TM is able to produce good solutions. Nevertheless, the algorithm is time-consuming on high-dimensional tasks, and therefore a parallel TM-algorithm is needed in order to validate our approach on high-dimensional tasks and to increase performance. Moreover, the parallel algorithm helps us to reveal the influence of the tabu parameters on the solution process and to determine the dependency between the tabu parameters and the characteristics of our problem, in order to obtain better solutions faster.
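Before turning to the distributed model, the sequential TM dynamics described in Section 2 can be condensed into a short sketch. The fragment below is an illustrative toy on a generic binary energy E(S) = -1/2 Σ w_ij s_i s_j + Σ T_i s_i, not the authors' implementation: each iteration flips the admissible neuron with the smallest energy change, where a move is admissible if the neuron is not tabu (k − t_i > l) or passes the aspiration test E(S) + ΔE_i < E(S′), and the STMP stops after βN non-improving iterations.

```python
import numpy as np

def energy(W, thr, s):
    # Generic TM energy: E(S) = -1/2 * s^T W s + T . s
    return -0.5 * s @ W @ s + thr @ s

def stmp(W, thr, s, l=3, beta=2.0):
    """One short-term memory process: repeatedly flip the admissible
    neuron with the smallest energy change; stop after beta*N
    iterations without improving the local best energy."""
    N = len(s)
    t = np.full(N, -l - 1)   # iteration of each neuron's last flip
    k = h = 0
    E = energy(W, thr, s)
    best_s, best_E = s.copy(), E
    while h < beta * N:
        # Energy change of flipping s_i (W symmetric, zero diagonal):
        dE = (1 - 2 * s) * (thr - W @ s)
        admissible = (k - t > l) | (E + dE < best_E)  # tabu rule or aspiration
        if not admissible.any():
            break
        cand = np.flatnonzero(admissible)
        i = int(cand[np.argmin(dE[cand])])
        s[i] = 1 - s[i]      # the move is made even when dE > 0
        t[i] = k
        E += dE[i]
        k += 1
        h += 1
        if E < best_E:
            best_s, best_E, h = s.copy(), E, 0
    return best_s, best_E
```

Note that uphill moves are accepted whenever they are the best admissible ones; this is what lets the machine leave local minima, at the price of the tabu bookkeeping in t.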

Fig. 1. The value of the objective function on mock-up problem solutions

4 A General Description of DTM Functioning

The proposed parallel algorithm of the TM exploits the parallelization capabilities of the following procedures: finding a neuron to change its state; updating the values ΔE(S_i) of the neurons for use on the next iteration; the calculation of the energy function value; the calculation of the values of the auxiliary functions used in the aspiration criteria of the TM; and the transition from one local cycle to the other.

For the case of a homogeneous parallel computational cluster with multiple identical nodes, the following general scheme of the new parallel functionality is proposed. The set of neurons of the whole TM is distributed among all processors according to the formula

N_p = n_1 + 1, if p < n_2; N_p = n_1 otherwise, where n_1 = ⌊N / P⌋, n_2 = N mod P,

N is the number of neurons in the whole TM, p = 0, …, P−1 is the index of a processor, and P is the number of processors. The number of Tabu sub-machines (TsMs) is equal to the number of available processors, so one TsM is located on each processor, and the TsM with index p consists of N_p neurons. During the initialization stage the neural characteristics are set for each neuron. The scheme of the DTM is depicted in Fig. 2. The same figure shows how the weight matrix W_p of each TsM is constructed from the weight matrix W = { w_ij ; i, j = 1, …, N } of the whole TM: W_p consists of the rows of W with indices i = Σ_{k<p} N_k + 1, …, Σ_{k≤p} N_k. When the optimal state of the DTM is achieved, the results from all TsMs are united. The proposition that the energy of the whole TM is additive over the energies of the TsMs included in the DTM, i.e. E = E_0 + E_1 + … + E_{P−1} = Σ_{p=0}^{P−1} E_p, was formulated and proved by the authors, but due to lack of space it is omitted from this article.
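The distribution rule above can be transcribed directly (a small sketch with hypothetical helper names; processor indices are zero-based as in the text), returning both the block sizes N_p and the row ranges of the whole weight matrix W owned by each TsM:

```python
def distribute_neurons(N, P):
    """Split N neurons over P processors: the first n2 = N mod P
    processors get n1 + 1 neurons, the rest get n1 = N // P.
    Returns the sizes N_p and the [start, stop) row ranges of the
    whole weight matrix W held by each Tabu sub-machine."""
    n1, n2 = divmod(N, P)
    sizes = [n1 + 1 if p < n2 else n1 for p in range(P)]
    ranges, start = [], 0
    for size in sizes:
        ranges.append((start, start + size))
        start += size
    return sizes, ranges
```

For example, `distribute_neurons(10, 3)` gives sizes [4, 3, 3] and ranges [(0, 4), (4, 7), (7, 10)], so block sizes differ by at most one neuron and every row of W is owned by exactly one TsM.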

Fig. 2. DTM scheme (top) and the method of constructing W_p from the whole matrix W (bottom)

Let us consider a common implementation of the DTM, taking into account the parallel implementation of the foregoing procedures.

Initialization. At this stage we assume that the TsMs included in the DTM are constructed and initialized. Construction and initialization are conducted following the above-mentioned scheme of distributing the DTM neurons among the set of available processors. After the structure of each TsM is defined, the TsMs are provided with the following characteristics: the matrix of neuron weights, the vector of neuron thresholds, and the vector of neuron biases. Thus, at the current stage we have a set of TsMs, and the elements of this set are

subTM_p = { W_p, I_p, T_p, n_p },   p = 0, …, P−1,   (3)

where subTM_p is the p-th TsM, W_p the matrix of its neuron weights, I_p the vector of neuron biases, T_p the vector of neuron thresholds, and n_p the vector of initial states of the TsM's neurons. The matrix W_p and the vectors I_p and T_p are defined according to the following formulas, each taking the block of the corresponding structure of the whole TM with row indices from Σ_{k<p} N_k + 1 to Σ_{k≤p} N_k:

W_p = { w_ij ; i = Σ_{k<p} N_k + 1, …, Σ_{k≤p} N_k ; j = 1, …, N },   (4)

I_p = { I_i ; i = Σ_{k<p} N_k + 1, …, Σ_{k≤p} N_k },   (5)

T_p = { t_i ; i = Σ_{k<p} N_k + 1, …, Σ_{k≤p} N_k }.   (6)

The vector n of initial states of the whole TM's neurons is generated randomly and then cut into P parts, each of which (i.e. n_p) corresponds to a concrete TsM.

The local cycle of the TM. Let us consider the local cycle of the DTM.

Choose the neuron-candidate for the next move. At the first step of the TM local cycle we search on each TsM for the neuron μ_p which should change its state at the current iteration. The criterion for choosing such a neuron is the following:

ΔE_p(S_{μ_p}) = min { ΔE_p(S_i) ; i = 1, …, N_p : (k − t_i > l) ∨ (E(S) + ΔE_p(S_i) < E(S*)) },   p = 0, …, P−1.   (7)
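The per-TsM selection of criterion (7) can be sketched as a small routine (a simplified stand-in with illustrative names, spelling out the tabu test k − t_i > l and the aspiration test E(S) + ΔE_i < E(S*)):

```python
def choose_candidate(dE, t, k, l, E_cur, E_best):
    """Within one TsM, return (dE_i, i) for the admissible move with
    the smallest energy change: a move on neuron i is admissible if
    it is not tabu (k - t[i] > l) or it satisfies the aspiration
    criterion E_cur + dE[i] < E_best. Returns None if no move is allowed."""
    best = None
    for i, (d, ti) in enumerate(zip(dE, t)):
        if (k - ti > l) or (E_cur + d < E_best):
            if best is None or d < best[0]:
                best = (d, i)
    return best
```

Each processor scans only its own N_p neurons, so this step is embarrassingly parallel; the local winners are then compared across the TsMs by the reduce step described next.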

Thus, the search for neurons satisfying condition (7) is performed in parallel on the hosts of the computing network.

The comparison of the found neurons. After a neuron satisfying condition (7) has been found on each host, a search using the STMP reduce operations defined by the authors for the MPI_Allreduce function is performed within the whole DTM to find the neuron μ such that ΔE(S_μ) = min { ΔE_p(S_{μ_p}) ; p = 0, …, P−1 }.

Change the energy values of the neurons. After the required neuron has been found, and each TsM has the information about it, each neuron of subTM_p, p = 0, …, P−1, updates its ΔE(S_i) value. The calculation of the change of the DTM energy function is done in parallel on each subTM. Then the cycle is repeated following the described scheme until the exit condition of the TM local cycle is satisfied.

The global cycle of the TM. On each TsM we select the neuron that has not changed its state for the longest time. The number of this neuron on each subTM_p is defined according to the following criterion:

t_{μ_p} = min { t_i ; i = 1, …, N_p },   p = 0, …, P−1.   (8)

The search for t_{μ_p} is done on the available processors in parallel according to formula (8).

The comparison of the found neurons. After a neuron satisfying condition (8) has been found on each host, a search using the LTMP reduce operations defined by the authors for the MPI_Allreduce function is performed within the whole DTM to find the neuron μ such that t_μ = min { t_{μ_p} ; p = 0, …, P−1 }.

Change the energy values of the neurons. After the required neuron has been found, and each TsM has the information about it, each neuron of subTM_p, p = 0, …, P−1, updates its ΔE(S_i) value. The calculation of the change of the DTM energy function is done in parallel on each subTM. Then the cycle is repeated following the described scheme until the number of LTMP calls exceeds C ∈ Z+, i.e. the global cycle is executed at most C times. After that the search is stopped, and the best state found is taken as the final DTM state.

5 The Algorithm of DTM Functioning

Let us represent the general description as a step-by-step algorithm. We will use the following notation: N, the number of neurons in the DTM, i.e. |S| = |S′| = |S*| = N; N_p, the number of neurons included in the TsM subTM_p, where p = 0, …, P−1; P, the number of processors on which the DTM operates.
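The paper does not show the authors' reduce operations themselves, but their effect can be emulated: each TsM contributes its local winner (or nothing), and an MPI_Allreduce-style reduction with a custom minimum-location operator leaves every process holding the global winner. A serial stand-in follows (with mpi4py one would register such an operator via MPI.Op.Create and call comm.allreduce; the tuple layout here is an assumption for illustration):

```python
def reduce_winner(candidates):
    """Emulate the STMP/LTMP reduction over TsM candidates.
    Each entry is (key, global_neuron_index, new_state) or None,
    where key is dE(S_mu_p) for the STMP and t_mu_p for the LTMP.
    The candidate with the smallest key wins; ties break on index."""
    found = [c for c in candidates if c is not None]
    if not found:
        return None
    return min(found, key=lambda c: (c[0], c[1]))
```

Because this minimum is associative and commutative, the result is the same regardless of the reduction tree the MPI runtime chooses, which is exactly what allows it to be expressed as an Allreduce.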

Step 1. Construct the TsMs subTM_p and randomly initialize the initial states of their neurons. Define the tabu size l of the DTM. Let h = 0 and k = 0 be the iteration counters in the frame of the whole DTM. Let c = 0, and let C be the maximum number of LTMP calls in the frame of the whole DTM. Let β > 0 be defined according to the inequality βN > l, again in the frame of the whole DTM.

Step 2. Find the initial local minimum energy state S. Calculate E(S) and ΔE(S_i), i = 1, …, N, piecewise:

ΔE(S_i) = ΔE_p(S_i),   i = Σ_{k<p} N_k + 1, …, Σ_{k≤p} N_k,   p = 0, …, P−1.   (9)

The values of E_p(S) and ΔE_p(S_i) for p = 0, …, P−1 are calculated in parallel on the P processors. Let S* = S be the best global state, with E(S*) = E(S) the global minimum of energy. Let S′ = S and E(S′) = E(S). Let t_i = 0, i = 1, …, N.

Step 3. In the frame of each subTM_p choose the neuron μ_p with ΔE_p(S_{μ_p}) satisfying ΔE_p(S_{μ_p}) = min { ΔE_p(S_i) ; i = 1, …, N_p : (k − t_i > l) ∨ (E(S) + ΔE_p(S_i) < E(S*)) }, p = 0, …, P−1.

Step 4. Using the STMP reduce operations defined by the authors, form the triple { μ, ΔE(S_μ), s′_μ }, where μ is the index of the neuron (in the frame of the whole DTM) changing its state at the current moment, ΔE(S_μ) is the change of the DTM energy function value after neuron μ has changed its state, and s′_μ is the new state of neuron μ.

Step 5. If subTM_p contains the neuron μ, then set t_μ = k and s_μ = s′_μ.

Step 6. Let k = k + 1, h = h + 1, and E(S) = E(S) + ΔE(S_μ) in the frame of the whole DTM.

Step 7. Update the values ΔE(S_i) using (9). The values ΔE_p(S_i) are calculated in parallel on the P processors.

Step 8. Determine whether the new state S is a new local and/or global minimum energy state: if E(S) < E(S′), then S′ = S, E(S′) = E(S) and h = 0; if E(S) < E(S*), then S* = S and E(S*) = E(S), in the frame of the whole DTM.

Step 9. If h < βN, go to Step 3; else go to Step 10.

Step 10. If c ≥ C, then the algorithm stops and S* is the best state. Otherwise, in the frame of each subTM_p choose in parallel the neuron μ_p with t_{μ_p} satisfying t_{μ_p} = min { t_i ; i = 1, …, N_p }, p = 0, …, P−1. Using the LTMP reduce operations defined by the authors, form the triple { μ, ΔE(S_μ), s′_μ }, where μ is the index of the neuron (in the frame of the whole DTM)

changing its state at the current moment, ΔE(S_μ) is the change of the DTM energy function value after neuron μ has changed its state, and s′_μ is the new state of neuron μ. Let S′ = S and E(S′) = E(S) + ΔE(S_μ), c = c + 1 and h = 0. Go to Step 6.

It is worth mentioning that in Step 10 the new local minimum energy state S′ is set without any auxiliary checks, i.e. it can be worse than the previous one. By exploiting this technique we prevent stagnation in local energy minima and expand the area of potential solutions.

6 Performance Evaluation

In order to evaluate the performance of the constructed DTM, a set of experiments on mock-up problems with DTMs consisting of N = 100, N = 400 and N = 1600 neurons was carried out on a multi-core cluster. 372 trial solutions were obtained for each mock-up problem, depending on the values of the parameters <l, C, β> of the DTM. We proposed to use the average acceleration as the metric for evaluating the efficiency of the DTM. The dependencies of the average acceleration on the number of processors for the mock-up problems with N = 100, N = 400 and N = 1600 are depicted in Fig. 3; the DTM gives a linear acceleration.

Fig. 3. Average acceleration on mock-up problems
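The text does not give a formula for the average acceleration; under the usual reading (the speedup T_sequential / T_parallel for a given processor count, averaged over the trial solutions) it can be computed as follows (an illustrative helper, not the authors' evaluation code):

```python
def average_acceleration(seq_times, par_times):
    """Average acceleration on a fixed processor count over a set of
    trials: the mean of T_sequential / T_parallel across paired runs."""
    speedups = [ts / tp for ts, tp in zip(seq_times, par_times)]
    return sum(speedups) / len(speedups)
```

Linear acceleration, as reported for the DTM, corresponds to this value growing roughly in proportion to the number of processors P.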

7 Conclusion

In this paper we proposed a parallel TS algorithm for the DDB OLS synthesis problem. The constructed DTM was validated and compared with the sequential TM. As expected, both approaches give the same results, with solution quality higher than the quality of the solutions received by the NN-GA-algorithm [1], [3] by 8.7% on average and by the BBM [8] by 23.6% on average on mock-up problems of higher dimension. It is worth mentioning that intensive data communication between the processors is carried out during the DTM cycles in the proposed algorithm. Nevertheless, we can speak of a significant increase of DTM performance in comparison with its consecutive analogue for high-dimensional problems. This statement is not contrary to our objectives, because the problem of DDB OLS synthesis is important today precisely in view of its high dimensionality.

References

1. Babkin, E., Karunina, M.: Comparative study of the Tabu machine and Hopfield networks for discrete optimization problems. In: Information Technologies 2008: Proc. of the 14th International Conference on Information and Software Technologies, IT 2008, Kaunas, Lithuania, April 24-25 (2008)
2. Babkin, E., Karunina, M.: The analysis of tabu machine parameters applied to discrete optimization problems. In: Proceedings of the 2009 IEEE/ACS International Conference on Computer Systems and Applications, AICCSA 2009, May 10-13, 2009, Rabat, Morocco. IEEE Computer Society (2009)
3. Babkin, E., Petrova, M.: Application of genetic algorithms to increase an overall performance of artificial neural networks in the domain of synthesis of DDBs optimal structures. In: Proc. of the 5th International Conference on Perspectives in Business Informatics Research (BIR 2006), October 6-7, 2006, Kaunas University of Technology, Lithuania. Information Technology and Control, Vol. 35, No. 3A, 285-294 (2006), ISSN 1392-124X
4. Chakrapani, J., Skorin-Kapov, J.: Massively parallel tabu search for the quadratic assignment problem. Annals of Operations Research 41, 327-341 (1993)
5. Fiechter, C.-N.: A parallel tabu search algorithm for large traveling salesman problems. Discrete Applied Mathematics 51, 243-267 (1994)
6. Garcia, B.-L., et al.: A parallel implementation of the tabu search heuristic for vehicle routing problems with time window constraints. Computers & Operations Research 21(9), 1025-1033 (1994)
7. Kant, K., Mohapatra, P.: Internet Data Centers. Computer, IEEE Computer Society (2004)
8. Kulba, V.V., Kovalevskiy, S.S., Kosyachenko, S.A., Sirotyuck, V.O.: Theoretical backgrounds of designing optimum structures of the distributed databases. Moscow: SINTEG (1999)
9. Porto, S.C.S., Kitajima, J.P.F.W., Ribeiro, C.C.: Performance evaluation of a parallel tabu search task scheduling algorithm. Parallel Computing 26, 73-90 (2000)
10. Sun, M., Nemati, H.R.: Tabu Machine: A New Neural Network Solution Approach for Combinatorial Optimization Problems. Journal of Heuristics 9, 5-27 (2003)
roblem, Annals of Oerations Research 4 327-34 (993) 5 Fiechter C-N A arallel tabu search algorithm for large traveling salesman roblems Discrete Alied Mathematics Vol 5 ELSEVER 243-267 (994) 6 Garcia B-L et al A arallel imlementation of the tabu search heuristic for vehicle routing roblems with time window constraints, Comuters Os Res, Vol2 No 9 25-33, (994) 7 Kant K, Mohaatra P nternet Data Centers Comuter, Published by the EEE Comuter Society 8-962/4 (24) 8 Kulba VV, Kovalevskiy SS, Kosyachenko SА, Sirotyuck VО Theoretical backgrounds of designing otimum structures of the distributed databases M: SNTEG (999) 9 Porto Stella C S, Kitaima Joao Paulo F W, Ribeiro Celso C Performance evaluation of a arallel tabu search task scheduling algorithm Parallel Comuting Vol 26 ELSEVER 73-9 (2) Sun M, Nemati H R Tabu Machine: A New Neural Network Solution Aroach for Combinatorial Otimization Problems, Journal of Heuristics, 9:5-27, (23)