Fast error estimates for indirect measurements: applications to pavement engineering


Reliable Computing 2 (3) (1996), pp. 219-228

CARLOS FERREGUT, SOHEIL NAZARIAN, KRISHNAMOHAN VENNALAGANTI, CHING-CHUAN CHANG, and VLADIK KREINOVICH

In many real-life situations, we have the following problem: we want to know the value of some characteristic y that is difficult to measure directly (e.g., lifetime of a pavement, efficiency of an engine, etc.). To estimate y, we must know the relationship between y and some directly measurable physical quantities x1, ..., xn. From this relationship, we extract an algorithm f that allows us, given xi, to compute y: y = f(x1, ..., xn). So, we measure xi, apply the algorithm f, and get the desired estimate. Existing algorithms for error estimation (interval mathematics, Monte-Carlo methods, numerical differentiation, etc.) require computation time that is several times larger than the time necessary to compute y = f(x1, ..., xn). So, if an algorithm f is already time-consuming, error estimates will take too long. In many cases, this algorithm f consists of two parts: first, we use xi to determine the parameters zk of a model that describes the measured object, and second, we use these parameters to estimate y. The most time-consuming part is finding zk; this is done by solving a system of non-linear equations; usually the least squares method is used. We show that for such f, one can estimate errors repeating this time-consuming part of f only once. So, we can compute both y and an error estimate for y with practically no increase in total computation time. As an example of this methodology, we give pavement lifetime estimates.
© C. Ferregut, S. Nazarian, K. Vennalaganti, C.-C. Chang, V. Kreinovich, 1996

This work was supported by NSF grants No. CDA-9015006 and IRI-92-11-662, NASA Research Grant No. 9-482, and a grant from the Materials Research Institute. We want to thank the United States Army Corps of Engineers, Waterways Experiment Station (USAE WES), Vicksburg, MS, for supplying us with the Layered Elastic Programs WESLEA and WESDEF. We are also greatly thankful to all the participants of the International Conference on Numerical Analysis with Automatic Result Verification, Lafayette, LA, February-March 1993, especially to Drs. S. Markov and W. Walster, for valuable discussions.

1. Formulation of a general problem: error estimation for indirect measurements

Indirect measurements: a real-life situation. In this paper, we will consider the following engineering problem: We want to know the value of some physical quantity y.

Problem: It is difficult (or even impossible) to measure y directly (examples of such quantities are the lifetime of a pavement, the efficiency of an engine, etc.).

Solution: To estimate y, we must know the relationship between y and some directly measurable physical quantities x1, ..., xn. From this relationship, we extract an algorithm f that allows us, given xi, to compute y: y = f(x1, ..., xn). So, we measure xi, apply the algorithm f to the measurement results x̃i, and get the desired estimate ỹ = f(x̃1, x̃2, ..., x̃n).
This procedure is called an indirect measurement of y.

Warning: This f is rarely an explicit analytical expression; usually, it is a complicated algorithm.

Remaining problem: What is the precision of the resulting value ỹ?

What do we know about the errors of measuring xi? The error of an indirect measurement is caused by the errors with which we know xi. So, to estimate the precision of ỹ, let us find out what we know about these precisions. For each measuring device, its supplier must provide some information about its precision (else, if no precision is guaranteed, this measuring device is of no use). There are two main ways to describe such a precision (see, e.g., [4]):

1) In many cases, we know only a characteristic of the total error Δxi = x̃i − xi. Namely, the supplier provides a number Δi such that the error never exceeds Δi (|Δxi| ≤ Δi); in other words, the actual value xi belongs to the interval [x̃i − Δi, x̃i + Δi]. This number Δi can depend on x̃i. For example, a supplier can provide us with a relative error δi, meaning that |Δxi| ≤ δi x̃i (so, in this case, Δi = δi x̃i).

2) The measurement error Δx is random. Therefore, we can represent it as a sum of two components: the mathematical expectation E(Δx), which is called the systematic error, and the difference Δx − E(Δx), which is called the random error. For some measuring devices, we know the characteristics of both components. Namely, the supplier provides a value Δi(s) such that the systematic error satisfies |E(Δxi)| ≤ Δi(s), and a value σi such that the standard deviation σ(Δxi) does not exceed σi.

These two characteristics can also depend on x̃i. For example, they can be given as relative errors δi(s) and si; in this case, Δi(s) = x̃i δi(s) and σi = x̃i si.

Systematic errors are usually negligible. To improve the quality of a measuring device, it makes sense to calibrate it first. By this we mean that before we release the device, we compare its results with the results of a more precise device, find the average error, and compensate for it. For calibrated devices, the systematic error is negligible, so we can assume that the total error coincides with its random component. After the calibration, we also have enough data to estimate the standard deviation σ(xi). So, we can assume that σ(xi) is known.

In view of this, in this paper, we will consider only two cases:

1) we know an interval for the error;

2) the error has 0 average, we know its standard deviation σ(xi), and the errors of different measurements are independent random variables.

2. How are errors estimated now?

The main methodologies for estimating the error of y are: numerical differentiation, Monte-Carlo methods (for a detailed description of these methods, see, e.g., [1, 2, 5, 13-15]), and interval mathematics (see, e.g., [11]). Let us briefly describe these methods.

Numerical differentiation. The error Δy = ỹ − y can be expressed as the difference

f(x̃1, ..., x̃n) − f(x1, ..., xn) = f(x̃1, ..., x̃n) − f(x̃1 − Δx1, ..., x̃n − Δxn).   (1)

Usually, the errors Δxi are relatively small, so we can neglect the terms that are quadratic in the errors. If we neglect quadratic (and higher order) terms in the Taylor expansion of the r.h.s. of (1), we can conclude that

Δy = Σi=1..n (∂f/∂xi) Δxi.   (2)

If we know that |Δxi| ≤ Δi, then we can conclude that |Δy| ≤ Δ, where

Δ = Σi=1..n |∂f/∂xi| Δi.   (3)

If we know σ(xi), then we can compute σ(y) as

σ(y) = sqrt( Σi=1..n (∂f/∂xi)² σ(xi)² ).   (4)

Since f is usually given as an algorithm, and not as an analytical expression, we must somehow compute the partial derivatives.
We can do that by applying formula (2) to the case when Δxi = h for some i and 0 for all other i (here, h is some small number). As a result, we get the following estimate:

∂f/∂xi ≈ [f(x̃1, ..., x̃i−1, x̃i, x̃i+1, ..., x̃n) − f(x̃1, ..., x̃i−1, x̃i − h, x̃i+1, ..., x̃n)] / h.   (5)

This is a standard formula for numerical differentiation.
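Formulas (3)-(5) are straightforward to turn into code. The following sketch (our illustration, not code from the paper; the example function f and the step h are arbitrary choices) estimates the partial derivatives by the one-sided difference (5), and then combines them by (3) for interval errors and by (4) for independent random errors:

```python
import math

def partials(f, x, h=1e-6):
    """Estimate all partial derivatives of f at x by formula (5):
    a one-sided difference quotient with a small step h."""
    fx = f(x)
    derivs = []
    for i in range(len(x)):
        shifted = list(x)
        shifted[i] -= h
        derivs.append((fx - f(shifted)) / h)
    return derivs

def interval_error(f, x, deltas, h=1e-6):
    """Formula (3): bound Delta on |dy| from the bounds |dx_i| <= Delta_i."""
    return sum(abs(d) * D for d, D in zip(partials(f, x, h), deltas))

def random_error(f, x, sigmas, h=1e-6):
    """Formula (4): st. dev. of y from independent errors with st. devs sigma_i."""
    return math.sqrt(sum((d * s) ** 2 for d, s in zip(partials(f, x, h), sigmas)))

# Example: f(x1, x2) = x1 * x2 at (2, 3); the exact partials are 3 and 2.
f = lambda x: x[0] * x[1]
print(interval_error(f, [2.0, 3.0], [0.1, 0.1]))  # ≈ 3*0.1 + 2*0.1 = 0.5
print(random_error(f, [2.0, 3.0], [0.1, 0.1]))    # ≈ sqrt(0.09 + 0.04) ≈ 0.361
```

Note that partials applies f a total of n + 1 times; this is exactly the cost that Section 3 below identifies as prohibitive when f is time-consuming.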

So, in order to get an estimate for Δy, we must compute the partial derivatives using (5), and then estimate Δ or σ(y) using (3) or (4).

Monte-Carlo methods. A standard Monte-Carlo method (see, e.g., [13]) is used to estimate σ(y). This method is based on the following idea: First, we choose the number of iterations N. Then, for k = 1, ..., N, we substitute into f the values x̃i + xi(k), where xi(k) are computer-simulated normally distributed random numbers with 0 average and standard deviation σ(xi). As a result, we get N values y(1), ..., y(N). Because of (2), the distribution of y(k) − ỹ is Gaussian, with 0 average and standard deviation σ(y). So, we can estimate σ(y) as sqrt( Σk (y(k) − ỹ)² / N ).

This method gives an approximate value of σ(y). In real life, it is sufficient to know σ(y) with ≈ 20% precision (practically no one distinguishes between the cases when the precision is, say, 0.01, or 0.012, which is 20% more). To get such precision, we need N ≈ 25 iterations. A Monte-Carlo style method to compute Δ was proposed in [7] (see also [6]). This method uses N ≈ 50 iterations to guarantee the desired 20% precision.

Interval techniques. Standard techniques of interval mathematics require that we decompose the algorithm f into elementary operations, and then apply interval operations instead of the usual ones.

3. What is wrong with the existing methods

To use a numerical differentiation method, we must apply the algorithm f n + 1 times: once to get ỹ, and then n times to estimate the partial derivatives. For example, if we have 10 variables, we need 11 times more time to estimate the error than to compute y itself. If f is fast, there is no problem. But if f is already a time-consuming algorithm, this is no good. For Monte-Carlo methods, we must apply f 25 or 50 times; so, the computation time for this method is 25-50 times longer than the time that is necessary to compute y.
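The standard Monte-Carlo estimate of σ(y) described above can be sketched as follows (again our toy illustration, not the authors' code; the test function, the σ values, and the seed are made up):

```python
import math
import random

def monte_carlo_sigma(f, x, sigmas, N=25, seed=0):
    """Estimate sigma(y) by simulating N measurements (Section 2): each run
    perturbs every x_i by a Gaussian error with standard deviation sigma_i."""
    rng = random.Random(seed)
    y0 = f(x)
    total = 0.0
    for _ in range(N):
        noisy = [xi + rng.gauss(0.0, s) for xi, s in zip(x, sigmas)]
        total += (f(noisy) - y0) ** 2
    return math.sqrt(total / N)

# Example: for y = x1 + x2, sigma(y) = sqrt(0.3^2 + 0.4^2) = 0.5 exactly;
# with N = 25 the estimate is only within roughly 20% of this value.
f = lambda x: x[0] + x[1]
print(monte_carlo_sigma(f, [1.0, 2.0], [0.3, 0.4]))
```

The loop calls f a total of N + 1 times, which is precisely the source of the 25-50-fold slowdown discussed above.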
For interval methods: an operation with intervals consists of 2 or 4 operations with numbers. Therefore, by applying interval mathematics, we increase the computation time at least 2-4 times. In addition, the error estimates that we obtain this way are often "overshoots" (i.e., much bigger than the biggest possible errors).

How to diminish this computation time? If we have several processors that can work in parallel, then we can use parallel algorithms for a speed-up (see, e.g., [6]). But what if we have only one processor? In this paper, we will describe a frequent real-life situation in which we can drastically decrease this computation time.

4. The new method of estimating errors

To develop a new method, we will use the fact that in many real-life situations, the algorithm f is of a very specific type, and for such f, we can estimate the error in y really fast. To elaborate on that, we need to recall where f usually comes from.

Where does f come from? We have mentioned that an algorithm f comes from the known relationship between xi and y. Usually, such a relationship exists in the form of a model of the physical phenomenon. By a model, we mean that we have (relatively simple) formulas

or algorithms that, given some parameters, allow us to compute all physical characteristics, including xi and y. Some of these parameters can be known from previous experiments (we will denote them by t1, ..., tq). Some of these parameters are specific to this particular situation (we will denote them by z1, ..., zp). These parameters must be determined from the known measurement results x1, ..., xn. So, an algorithm that estimates y from x̃i consists of two stages: first, we use the measurement results x̃i to get estimates z̃k for the parameters of the model, and second, we use the estimates z̃k and t̃l to estimate y.

Let us denote by Fi and F the functions that describe the model, i.e., that express xi and y in terms of zk and tl: xi = Fi(z1, ..., zp, t1, ..., tq) and y = F(z1, ..., zp, t1, ..., tq). Then, to estimate zk, we must solve the system of equations x̃i = Fi(z1, ..., zp, t̃1, ..., t̃q), 1 ≤ i ≤ n. Since we know only approximate values of xi, we will get approximate values of zk as well. To improve the precision, we can perform additional measurements. In this case, we will get an over-determined system (with more equations than unknowns). Usually, such systems are solved by the least squares method, i.e., from the condition that Σi=1..n (x̃i − Fi(z1, ..., zp, t̃1, ..., t̃q))² → min over z1, ..., zp.

Example. For example, in celestial mechanics, there are rather simple formulas that describe the trajectory of any celestial body in the Solar system (e.g., a comet). The parameters of this trajectory include the position ti of the Sun (which is well known from other measurements), and the parameters zk of the orbit relative to the Sun (which have to be determined from the observations). So, if we want to predict a future position y from several observations x̃i, we must find the parameters zk of the trajectory from x̃i, and then predict y.

What is the most time-consuming part of this algorithm? If Fi are linear, we can easily solve a system of linear equations and find zk.
The problem becomes computationally complicated when the functions Fi are highly non-linear. In this case, no matter what method we use to compute zk (Newton's method, other optimization techniques, etc.), all these methods consist of several iterations: we find a first guess zk, substitute these values into Fi, then adjust the values of zk, etc. In other words, the first "back-calculation" stage (estimating zk from x̃i) includes several "forward" steps (computing xi from zk). Therefore, the computation time for the first stage is much longer than the time for the second stage. So, the most time-consuming part of an algorithm f is estimating zk from x̃i.

Our idea. The reason why the existing methods take so long is that they involve several applications of f and, therefore, several repetitions of the time-consuming "back-calculation" step: first for the initial values x̃ = (x̃1, ..., x̃n), to get the estimates z̃k, and then for several other input values xi that are close to x̃i. Of course, we need to apply this back-calculation once, to get the values z̃k. But next time, when we want to compute zk for some close values xi, we do not have to repeat this complicated procedure of solving a system of non-linear equations: indeed, if xi are close to x̃i, then the corresponding zk will be close to z̃k. So, it is sufficient to find the small deviations Δzk = zk − z̃k. We consider the case when the errors are relatively small, so we can neglect

the terms that are quadratic in Δzk, and therefore, instead of the original non-linear system of equations Fi(z, t) = xi, solve its linearized version.

How to use this idea in the case of interval estimates. Since we assume that our model is accurate, the true values zk and tl of the parameters must satisfy the equations Fi(z1, ..., zp, t1, ..., tq) = xi. If we substitute zk = z̃k − Δzk, tl = t̃l − Δtl, and xi = x̃i − Δxi into these equations, expand Fi into a Taylor series, and retain only the terms that are linear in Δzk and Δtl, we will conclude that

Σk aik Δzk = Δxi − δi − Σl bil Δtl,   (6)

where δi = x̃i − Fi(z̃k, t̃l), and

aik = ∂Fi/∂zk,  bil = ∂Fi/∂tl.   (7)

From y = F(z1, ..., zp, t1, ..., tq), we can likewise conclude that

Δy = Σk ck Δzk + Σl dl Δtl,   (8)

where

ck = ∂F/∂zk,  dl = ∂F/∂tl.   (9)

The only thing we know about the values Δxi, Δzk, and Δtl is that they satisfy (6) and that the values of Δxi and Δtl are limited by the corresponding measurement errors. So, to find the interval of possible values of Δy, we must find the biggest and the smallest values of Δy under these conditions. Since these conditions are linear in the unknowns, this optimization problem becomes a well-known linear programming problem, for which fast algorithms are well known. So, we are ready to formulate an algorithm.

Algorithm that estimates errors for the case when we know intervals.

Given:
1) An algorithm Fi that estimates xi from zk and tl.
2) An algorithm F that estimates y from zk and tl.
3) An algorithm f that gives an estimate ỹ for y from x̃i by first estimating zk and then estimating y.
4) The measured values x̃i, 1 ≤ i ≤ n, and their precisions Δi (i.e., numbers such that |Δxi| = |x̃i − xi| ≤ Δi).
5) The measured values t̃1, ..., t̃q, and their precisions Δ′l.

Objective: To find an estimate Δ for Δy = ỹ − y (i.e., a value Δ such that |Δy| ≤ Δ).

Algorithm:
1) Apply f and get estimates z̃k and ỹ.

2) Use numerical differentiation to estimate the derivatives (7) and (9).
3) Estimate δi = x̃i − Fi(z̃1, ..., z̃p, t̃1, ..., t̃q).
4) Find Δ+ as the solution of the following linear programming problem with unknowns Δxi, Δzk, and Δtl: Σk ck Δzk + Σl dl Δtl → max under the conditions (6), −Δi ≤ Δxi ≤ Δi (1 ≤ i ≤ n), and −Δ′l ≤ Δtl ≤ Δ′l (1 ≤ l ≤ q).
5) Find Δ− as the solution of a similar problem, but with min instead of max.
6) Compute Δ = max(Δ+, |Δ−|).

Comment. In this algorithm, the time-consuming "back-calculation" part is repeated only once. Therefore, for time-consuming f, this algorithm enables us to estimate Δy with practically no increase in total computation time.

The case of random errors. In this case, for xi close to x̃i, we can also linearize the expressions Fi. Therefore, instead of a system of non-linear equations, we get the linearized system (6). After this system has been solved, we can use (8) to estimate Δy. We must estimate the standard deviation of y. Because of (8),

σ(y)² = E((Δy)²) = Σ ck ck′ E(Δzk Δzk′) + 2 Σ ck dl E(Δzk Δtl) + Σ dl dl′ E(Δtl Δtl′).   (10)

So, to estimate σ(y), we must estimate these mathematical expectations E(Δzk Δzk′), etc. If we denote Xk = Δzk and Xp+l = Δtl, then we can say that we must know E(Xk Xl) for all k and l. To get these estimates, we can apply the standard methodology of the (linear) least squares method (see, e.g., [8, 10]). We are interested in p + q variables: Δzk, 1 ≤ k ≤ p, and Δtl, 1 ≤ l ≤ q. For these unknowns, we have the following system of equations: Σk aik Δzk + Σl bil Δtl ≈ 0 with standard deviation σ(xi), and Δtl ≈ 0 with standard deviation σ(tl). In terms of Xk, we have n + q equations: Σk Aik Xk ≈ 0 with standard deviation σi, where for i ≤ n: Aik = aik for k ≤ p, and Ai,p+l = bil; for i > n: An+l,p+l = 1 and An+l,k = 0 for k ≠ p + l; σi = σ(xi) for i ≤ n, and σn+l = σ(tl). According to least squares theory, the matrix E(Xk Xl) is the inverse of the matrix Ukl = Σi Aik Ail / σi².
So, we can compute Ukl, invert it, and use the resulting values of E(Xk Xl) to estimate σ(y). Hence, we arrive at the following algorithm:

Algorithm that estimates σ(y) when the errors are random.

Given:
1) An algorithm Fi that estimates xi from zk and tl.
2) An algorithm F that estimates y from zk and tl.
3) An algorithm f that gives an estimate ỹ for y from x̃i by first estimating zk and then estimating y.
4) The measured values x̃i, 1 ≤ i ≤ n, and their standard deviations σ(xi).
5) The measured values t̃1, ..., t̃q, and their standard deviations σ(tl).

Objective: To find the standard deviation σ(y).

Algorithm:
1) Apply f and get estimates z̃k and ỹ.

2) Use numerical differentiation to estimate the derivatives (7) and (9).
3) Compute Aik, σi, and Ukl (as above).
4) Invert the matrix Ukl and get E(Xk Xl).
5) Substitute the resulting values of E(Xk Xl) into formula (10) and thus compute σ(y).

Comments.
1. In this algorithm, the time-consuming "back-calculation" part is also repeated only once. Therefore, for time-consuming f, we can estimate ỹ and σ(y) with practically no increase in total computation time.
2. If some parameters tl are already known with much better precision than our measurements, then we can simplify the computations by assuming that they are absolutely precise (Δtl = 0). In this case, it is sufficient to consider fewer than p + q variables, and thus the computation will be even shorter. Another case when we can invert matrices of sizes smaller than (p + q) × (p + q) is when some of the parameters tl do not influence xi at all, and are used only to compute y. For such parameters tl, E((Δtl)²) = σ(tl)², and E(Δtl Δzk) = 0 for all k.

5. Case study: pavement engineering

The problem. One of the main problems of pavement engineering is to predict the remaining lifetime y of a pavement. It is impossible to measure y directly, so instead a Falling Weight Deflectometer (FWD) is used: we drop a mass on the pavement, and measure the resulting deflections x1, ..., xn in several (usually n = 7) locations on the surface of the pavement. To estimate y from x̃i, a 3-layer pavement model is used [9, 16]. According to this model, the pavement consists of 3 layers: the 1st, asphaltic/concrete; the 2nd, base; the 3rd, subgrade layer. Each layer is characterized by 3 parameters: its thickness hi, its Poisson ratio νi, and its elastic modulus Ei. The thickness hi does not change in time, so we can take hi from the documents that describe the design of this pavement. The values νi may somewhat change.
However, νi describe the horizontal strain, and the pavement is designed for vertical loads only. So, these changes do not influence the lifetime, and we can neglect them. So, in this problem, we have 7 parameters tl whose values are known from previous measurements: h1, h2, h3, ν1, ν2, ν3, and the weight w of the FWD mass. We have 3 parameters zk that have to be determined from the measurements: E1, E2, E3. The corresponding algorithms Fi and F are given in [9, 16] (here, F is also a two-stage algorithm: given zk and tl, we first compute the tensile and compressive strains, and then use these strains to predict y).

How errors were estimated before. In this problem, all the sensors are calibrated, so we can assume that the errors are random. In [12, 17], a Monte-Carlo method was used.

Computation time of the existing method. On a 386 IBM PC, forward computations (i.e., computing y and xi from the known parameters zk and tl) take about 20 seconds. A program f that estimates y from x̃1, ..., x̃n takes about 3 minutes to run. The Monte-Carlo method requires up to 25 repetitions of this program, so its total computation time is about 80 min. If we take into consideration that we must measure the value of y for every mile, this is too long.
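The covariance computation that the new method substitutes for these repetitions (steps 3)-5) of the σ(y) algorithm in Section 4) can be sketched on a toy analogue: a made-up two-parameter model with three measurements and no uncertain tl parameters, so that U is just a 2 × 2 matrix (all numbers below are illustrative, not from the pavement model):

```python
import math

# Hypothetical linearized model: A[i][k] = dF_i/dz_k are the sensitivities of
# the three measurements to the two parameters z_k, sigma[i] are the measurement
# standard deviations, and c[k] = dF/dz_k are the sensitivities of y.
# In the paper, all of these come from one round of numerical differentiation.
A = [[1.0, 0.5],
     [0.5, 1.0],
     [1.0, 1.0]]
sigma = [0.1, 0.1, 0.2]
c = [2.0, -1.0]

# Least-squares theory: U_kl = sum_i A_ik A_il / sigma_i^2, and the parameter
# covariances are E(dz_k dz_l) = (U^-1)_kl.
U = [[sum(A[i][k] * A[i][l] / sigma[i] ** 2 for i in range(3))
      for l in range(2)] for k in range(2)]
det = U[0][0] * U[1][1] - U[0][1] * U[1][0]
Uinv = [[U[1][1] / det, -U[0][1] / det],
        [-U[1][0] / det, U[0][0] / det]]

# Formula (10) with no t_l terms: sigma(y)^2 = sum_{k,l} c_k c_l E(dz_k dz_l).
var_y = sum(c[k] * c[l] * Uinv[k][l] for k in range(2) for l in range(2))
print(math.sqrt(var_y))  # ≈ 0.426
```

Inverting this small matrix costs essentially nothing compared with even one run of a realistic forward model, which is why the complete error estimate adds so little to the 3-minute back-calculation.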

Our results. We applied the above algorithm to estimate σ(y) (for details, see [3]). The entire algorithm, including both computing ỹ from x̃i and estimating σ(y), ran for about 3.5 minutes. This time is close to the time (3 min) required to compute ỹ from x̃i.

Conclusion

With practically no increase in the computation time, we not only compute y, but we also estimate its precision.

References

[1] Beck, J. V. Parameter estimation in engineering and science. Wiley, 1977.
[2] Breipohl, A. M. Probabilistic systems analysis. Wiley, 1970.
[3] Chang, C.-C. Fast algorithm that estimates the precision of indirect measurements. Master Project, Computer Science Dept., University of Texas at El Paso, 1992.
[4] Fuller, W. A. Measurement error models. Wiley, 1987.
[5] Handbook of engineering mathematics. American Society for Metals, 1983.
[6] Kreinovich, V., Bernat, A., et al. Parallel computers estimate errors caused by imprecise data. Interval Computations 1 (2) (1991), pp. 31-46.
[7] Kreinovich, V. and Pavlovich, M. I. Error estimate of the result of indirect measurements by using a calculational experiment. Measurement Techniques 28 (3) (1985), pp. 201-205.
[8] Linnik, Yu. V. Method of least squares and principles of the theory of observations. Pergamon Press, 1961.
[9] Lytton, R. L. Backcalculation of pavement layer properties. In: Bush III, A. J. and Baladi, G. Y. (eds) "Nondestructive Testing of Pavements and Backcalculation of Moduli, ASTM STP 1026", American Society for Testing and Materials, Philadelphia, 1986, pp. 7-38.
[10] Mikhail, E. M. Observations and least squares. Thomas Y. Crowell, 1976.
[11] Moore, R. E. Methods and applications of interval analysis. SIAM, Philadelphia, 1979.
[12] Rodriguez-Gomez, J., Ferregut, C., and Nazarian, S. Impact of variability in pavement parameters on backcalculated moduli. University of Texas at El Paso, Dept. of Civil Engineering, Techn. Rep., 1991.
[13] Rubinstein, R. Y.
Simulation and the Monte Carlo method. Wiley, 1981.
[14] Scheaffer, R. L. and McClave, J. T. Probability and statistics for engineers. PWS-KENT, 1986.
[15] Sneddon, I. N. Encyclopaedic dictionary of mathematics for engineers and applied scientists. Pergamon Press, 1976.
[16] Uzan, J. et al. A microcomputer based procedure for backcalculating layer moduli from FWD data. Texas Transportation Institute Research Report 1123-1, 1988.

[17] Vennalaganti, K. M., Ferregut, C., and Nazarian, S. Stochastic analysis of errors in remaining life due to misestimation of pavement parameters in NDT. University of Texas at El Paso, Civil Engineering Dept., Techn. Rep., 1992.

Received: March 10, 1993

C. FERREGUT, S. NAZARIAN, K. VENNALAGANTI
Department of Civil Engineering
University of Texas at El Paso
El Paso, TX 79968, USA

C.-C. CHANG, V. KREINOVICH
Computer Science Department
University of Texas at El Paso
El Paso, TX 79968, USA

C.-C. CHANG, CURRENT ADDRESS: 8 Kan Lo St., Taichung, Taiwan, Republic of China