Approximative Methods for Monotone Systems of min-max-polynomial Equations

Javier Esparza, Thomas Gawlitza, Stefan Kiefer, and Helmut Seidl
Institut für Informatik, Technische Universität München, Germany
{esparza,gawlitza,kiefer,seidl}@in.tum.de
(This work was in part supported by the DFG project "Algorithms for Software Model Checking".)

Abstract. A monotone system of min-max-polynomial equations (min-max-MSPE) over the variables X_1, ..., X_n has for every i exactly one equation of the form X_i = f_i(X_1, ..., X_n), where each f_i(X_1, ..., X_n) is an expression built up from polynomials with non-negative coefficients, minimum- and maximum-operators. The question of computing least solutions of min-max-MSPEs arises naturally in the analysis of recursive stochastic games [4, 5, 12]. Min-max-MSPEs generalize MSPEs, for which convergence speed results for Newton's method are established in [9, 2]. We present the first methods for approximatively computing least solutions of min-max-MSPEs which converge at least linearly. Whereas the first one converges faster, a single step of the second method is cheaper. Furthermore, we compute epsilon-optimal positional strategies for the player who wants to maximize the outcome in a recursive stochastic game.

1 Introduction

In this paper we study monotone systems of min-max-polynomial equations (min-max-MSPEs). A min-max-MSPE over the variables X_1, ..., X_n contains for every 1 <= i <= n exactly one equation of the form X_i = f_i(X_1, ..., X_n), where every f_i(X_1, ..., X_n) is an expression built up from polynomials with non-negative coefficients, minimum- and maximum-operators. An example of such an equation is X_1 = 3X_1X_2 + 5X_1^2 ∧ 4X_2 (where ∧ is the minimum operator). The variables range over non-negative reals. Min-max-MSPEs are called monotone because every f_i is a monotone function in all arguments.

Min-max-MSPEs naturally appear in the study of two-player stochastic games and competitive Markov decision processes, in which, broadly speaking, the next move is decided by one of the two players or by tossing a coin, depending on the game's position (see e.g. [10, 6]). The min and max operators model the competition between the players. The product operator, which leads to non-linear equations, allows to deal with recursive stochastic games [4, 5], a class of games with an infinite number of positions, having as a special case extinction games, in which the players influence with their actions the development of a population whose members reproduce and die, and the players' goals are to extinguish the population or to keep it alive (see Section 4).

Min-max-MSPEs generalize several other classes of equation systems. If the product is disallowed, we obtain systems of min-max linear equations, which appear in classical two-person stochastic games with a finite number of game positions. The problem of solving these systems has been thoroughly studied [1, 7, 8]. If both min and max are disallowed, we obtain monotone systems of polynomial equations, which are central to the study of recursive Markov chains and probabilistic pushdown systems, and have recently been studied in [3, 9, 2]. If only one of min or max is disallowed, we obtain a class of systems corresponding to recursive Markov decision processes [4]. All these models have applications in the analysis of probabilistic programs with procedures [12].

In vector form we denote a min-max-MSPE by X = f(X), where X denotes the vector (X_1, ..., X_n) and f denotes the vector (f_1, ..., f_n). By Kleene's theorem, if a min-max-MSPE has a solution then it also has a least one, denoted by µf, which is also the relevant solution for the applications mentioned above. Kleene's theorem also ensures that the iterative process κ^(0) = 0, κ^(k+1) = f(κ^(k)), k ∈ N, the so-called Kleene sequence, converges to µf. However, this procedure can converge very slowly: in the worst case, the number of accurate bits of the approximation grows only with the logarithm of the number of iterations (cf. [3]). Thus, the goal is to replace the function f by an operator G: R^n -> R^n such that the respective iterative process also converges to µf, but faster. In [3, 9, 2] this problem was studied for min-max-MSPEs without the min and max operators. There, G was chosen as one step of the well-known Newton's method (cf. for instance [11]). This means that, for a given approximant x^(k), the next approximant x^(k+1) = G(x^(k)) is determined by the unique solution of a linear equation system which is obtained from the first-order Taylor approximation of f at x^(k). It was shown that this choice guarantees linear convergence, i.e., the number of accurate bits grows linearly in the number of iterations. Notice that when characterizing the convergence behavior the term "linear" does not refer to the size of f.

However, this technique no longer works for arbitrary min-max-MSPEs. If we approximate f at x^(k) through its first-order Taylor approximation at x^(k), there is no guarantee that the next approximant still lies below the least solution, and the sequence of approximants may even diverge. For this reason, the PReMo tool [12] uses round-robin iteration for min-max-MSPEs, an optimization of Kleene iteration. Unfortunately, this technique also exhibits logarithmic convergence behavior in the worst case.

In this paper we overcome this problem of Newton's method. Instead of approximating f (at the current approximant x^(k)) by a linear function, both of our methods approximate f by a piecewise linear function. In contrast to the applications of Newton's method in [3, 9, 2], this approximation may not have a unique fixpoint, but it has a least fixpoint, which we use as the next approximant x^(k+1) = G(x^(k)). Our first method uses an approximation of f at x^(k) whose least fixpoint can be determined using the algorithm for systems of rational equations from [8]. The approximation of f at x^(k) used by our second method allows to use linear programming to compute x^(k+1). Our methods are the first algorithms for approximatively computing µf which converge at least linearly, provided that f is quadratic, an easily achievable normal form.

The rest of the paper is organized as follows. In Section 2 we introduce basic concepts and state some important facts about min-max-MSPEs. A class of games which can be analyzed using our techniques is presented in Section 4. Our main contribution, the two approximation methods, is presented and analyzed in Sections 5 and 6. In Section 7 we study the relation between our two approaches and compare them to previous work. We conclude in Section 8. Missing proofs can be found in a technical report [?].

2 Notations, Basic Concepts and a Fundamental Theorem

As usual, R and N denote the sets of real and natural numbers. We assume 0 ∈ N. We write R_{≥0} for the set of non-negative real numbers. We use bold letters for vectors, e.g. x ∈ R^n. In particular 0 denotes the vector (0, ..., 0). The transpose of a matrix or a vector is indicated by the superscript T. We assume that the vector x ∈ R^n has the components x_1, ..., x_n. Similarly, the i-th component of a function f: R^n -> R^m is denoted by f_i. As in [2], we say that x ∈ R^n has i ∈ N valid bits of y ∈ R^n if |x_j - y_j| <= 2^(-i) |y_j| for j = 1, ..., n. We identify a linear function from R^n to R^m with its representation as a matrix from R^{m x n}. The identity matrix is denoted by I. The Jacobian of a function f: R^n -> R^m at x ∈ R^n is the matrix of all first-order partial derivatives of f at x, i.e., the m x n-matrix with the entry ∂f_i/∂X_j(x) in the i-th row and the j-th column. We denote it by f'(x).

The partial order ≤ on R^n is defined by setting x ≤ y iff x_i ≤ y_i for all i = 1, ..., n. We write x < y iff x ≤ y and x ≠ y. The operators ∧ and ∨ are defined by x ∧ y := min{x, y} and x ∨ y := max{x, y} for x, y ∈ R. These operators are also extended component-wise to R^n and point-wise to R^n-valued functions.

A function f: D ⊆ R^n -> R^m is called monotone on M ⊆ D if f(x) ≤ f(y) for every x, y ∈ M with x ≤ y. Let X ⊆ R^n and f: X -> X. A vector x ∈ X is called a fixpoint of f if x = f(x). It is the least fixpoint of f if y ≥ x for every fixpoint y ∈ X of f. If it exists, we denote the least fixpoint of f by µf. We call f feasible if f has some fixpoint x ∈ X.

Let us fix a set X = {X_1, ..., X_n} of variables. We call a vector f = (f_1, ..., f_m) of polynomials f_1, ..., f_m in the variables X_1, ..., X_n a system of polynomials. f is called linear (resp. quadratic) if the degree of each f_i is at most 1 (resp. 2), i.e., every monomial contains at most one variable (resp. two variables). As usual, we identify f with its interpretation as a function from R^n to R^m. As in [9, 2] we call f a monotone system of polynomials (MSP for short) if all coefficients are non-negative.

Min-max-MSPs. Given polynomials f_1, ..., f_k, we call f_1 ∧ ... ∧ f_k a min-polynomial and f_1 ∨ ... ∨ f_k a max-polynomial. A function that is either a min- or a max-polynomial is also called a min-max-polynomial. We call f = (f_1, ..., f_n) a system of min-polynomials if every component f_i is a min-polynomial. The definition of systems of max-polynomials and systems of min-max-polynomials is analogous. A system of min-max-polynomials is called linear (resp. quadratic) if all occurring polynomials are linear (resp. quadratic). By introducing auxiliary variables, every system of min-max-polynomials can be transformed into a quadratic one in time linear in the size of the system (cf. [9]). A system of min-max-polynomials where all coefficients are from R_{≥0} is called a monotone system of min-max-polynomials (min-max-MSP for short). The terms min-MSP and max-MSP are defined analogously.

Example 1. f(x_1, x_2) = (1/2·x_2^2 + 1/2 ∧ 3, x_1 ∨ 2) is a quadratic min-max-MSP.
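For such a small system the Kleene sequence from the introduction can be computed directly. The following Python sketch is an illustration only (not part of the paper); it assumes the reading of Example 1 given above and iterates κ^(k+1) = f(κ^(k)) starting from 0. Here the iteration reaches µf = (3, 3) after a few steps; in general, Kleene iteration only converges in the limit and can be very slow.

    # Kleene iteration for the min-max-MSP of Example 1:
    #   f(x1, x2) = ( 1/2*x2^2 + 1/2  MIN  3 ,  x1  MAX  2 )

    def f(x):
        x1, x2 = x
        return (min(0.5 * x2**2 + 0.5, 3.0),   # min-polynomial component
                max(x1, 2.0))                  # max-polynomial component

    kappa = (0.0, 0.0)                         # kappa^(0) = 0
    for k in range(50):
        kappa = f(kappa)                       # kappa^(k+1) = f(kappa^(k))
    print(kappa)                               # reaches the least fixpoint (3.0, 3.0)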

A min-max-MSP f = (f_1, ..., f_n) can be considered as a mapping from R_{≥0}^n to R_{≥0}^n. The Kleene sequence (κ^(k))_{k∈N} is defined by κ^(k) := f^k(0), k ∈ N. We have:

Lemma 1. Let f: R_{≥0}^n -> R_{≥0}^n be a min-max-MSP. Then: (1) f is monotone and continuous on R_{≥0}^n; and (2) if f is feasible (i.e., f has some fixpoint), then f has a least fixpoint µf and µf = lim_{k} κ^(k).

Strategies. Assume that f denotes a system of min-max-polynomials. A ∨-strategy σ for f is a function that maps every max-polynomial f_i = f_{i,1} ∨ ... ∨ f_{i,k_i} occurring in f to one of the f_{i,j}'s and every min-polynomial f_i to f_i itself. We also write f_i^σ for σ(f_i). Accordingly, a ∧-strategy π for f is a function that maps every min-polynomial f_i = f_{i,1} ∧ ... ∧ f_{i,k_i} occurring in f to one of the f_{i,j}'s and every max-polynomial f_i to f_i itself. We denote the set of ∨-strategies for f by Σ_f and the set of ∧-strategies for f by Π_f. For s ∈ Σ_f ∪ Π_f, we write f^s for (f_1^s, ..., f_n^s). We define Π_f^f := {π ∈ Π_f | f^π is feasible}. We drop the subscript f whenever it is clear from the context.

Example 2. Consider f from Example 1. Then π: 1/2·x_2^2 + 1/2 ∧ 3 -> 3, x_1 ∨ 2 -> x_1 ∨ 2 is a ∧-strategy. The max-MSP f^π is given by f^π(x_1, x_2) = (3, x_1 ∨ 2).

We collect some elementary facts concerning strategies.

Lemma 2. Let f be a feasible min-max-MSP. Then: (1) µf^σ ≤ µf for every σ ∈ Σ; (2) µf ≤ µf^π for every π ∈ Π^f; (3) µf^π = µf for some π ∈ Π^f.

In [4] the authors consider a subclass of recursive stochastic games for which they prove that a positional optimal strategy exists for the player who wants to maximize the outcome (Theorem 2 of [4]). The outcome of such a game is the least fixpoint of some min-max-MSP. In our setting, Theorem 2 of [4] implies that there exists a ∨-strategy σ such that µf^σ = µf, provided that f is derived from such a recursive stochastic game. Example 3 shows that this property does not hold for arbitrary min-max-MSPs.

Example 3. Consider f from Example 1. Let σ_1, σ_2 ∈ Σ be defined by σ_1(x_1 ∨ 2) = x_1 and σ_2(x_1 ∨ 2) = 2. Then µf^{σ_1} = (1, 1), µf^{σ_2} = (5/2, 2) and µf = (3, 3).

The proof of the following fundamental result is inspired by the proof of Theorem 2 in [4]. Although the result looks very natural, it is non-trivial to prove.

Theorem 1. Let f be a feasible max-MSP. Then µf^σ = µf for some σ ∈ Σ.
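Fixing a ∨-strategy removes all maxima, and the resulting min-MSP can again be iterated. The sketch below is illustrative only, reusing the Example 1 system as read above; it computes µf^{σ_1} and µf^{σ_2} from Example 3 by plain Kleene iteration. Both stay below µf = (3, 3), as Lemma 2(1) predicts, and for σ_1 the iteration effectively solves x = 1/2·x^2 + 1/2, exhibiting the logarithmic convergence of Kleene iteration mentioned in the introduction.

    # Fixing the two max-strategies of Example 3 and computing mu f^sigma by
    # Kleene iteration.  sigma_1 picks x1 in (x1 MAX 2), sigma_2 picks the constant 2.

    def f_sigma(x, pick_x1):
        x1, x2 = x
        return (min(0.5 * x2**2 + 0.5, 3.0),
                x1 if pick_x1 else 2.0)        # the branch chosen for x1 MAX 2

    for pick_x1, name in ((True, "sigma_1"), (False, "sigma_2")):
        x = (0.0, 0.0)
        for _ in range(2000):
            x = f_sigma(x, pick_x1)
        print(name, x)
    # sigma_2 reaches mu f^sigma_2 = (2.5, 2.0) exactly after two steps; for sigma_1
    # the iterate is still about 1e-3 away from mu f^sigma_1 = (1, 1) after 2000
    # steps, since Kleene iteration gains accuracy only logarithmically here.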

4 A Class of Applications: Extinction Games

In order to illustrate the interest of min-max-MSPs we consider extinction games, which are special stochastic games. Consider a world of n different species s_1, ..., s_n. Each species s_i is controlled by one of two adversarial players. For each s_i there is a non-empty set A_i of actions. An action a ∈ A_i replaces a single individual of species s_i by other individuals specified by the action a. The actions can be probabilistic. E.g., an action could transform an adult rabbit to zero individuals with probability 0.2, to an adult rabbit with probability 0.3, and to an adult and a baby rabbit with probability 0.5. Another action could transform an adult rabbit to a fat rabbit. The max-player (min-player) wants to maximize (minimize) the probability that some initial population is extinguished. During the game each player continuously chooses an individual of a species s_i controlled by her/him and applies an action from A_i to it. Note that actions on different species are never in conflict and the execution order is irrelevant. What is the probability that the population is extinguished if the players follow optimal strategies?

To answer this question we set up a min-max-MSP with one min-max-polynomial for each species, thereby following [?, 4]. The variable X_i represents the probability that a population with only a single individual of species s_i is extinguished. In the rabbit example we have X_adult = 0.2 + 0.3·X_adult + 0.5·X_adult·X_baby ∨ X_fat, assuming that the adult rabbits are controlled by the max-player. The probability that an initial population with p_i individuals of species s_i is extinguished is given by ∏_{i=1}^{n} ((µf)_i)^{p_i}. The stochastic termination games of [4, 5, 12] can be considered as extinction games. In the following we present another instance.

The primaries game. Hillary Clinton has to decide her strategy in the primaries. Her team estimates that undecided voters have not yet decided to vote for her for three possible reasons: they consider her (a) cold and calculating, (b) too much part of Washington's establishment, or (c) they listen to Obama's campaign. So the team decides to model those problems as species in an extinction game. The larger the population of a species, the more influenced is an undecided voter by the problem. The goal of Clinton's team is to maximize the extinction probabilities. Clinton's possible actions for problem (a) are showing emotions or concentrating on her program. If she shows emotions, her team estimates that the individual of problem (a) is removed with probability 0.3, but with probability 0.7 the action backfires and produces yet another individual of (a). This and the effect of concentrating on her program can be read off from Equation (1) below. For problem (b), Clinton can choose between concentrating on her voting record or her statement "I'll be ready from day 1". Her team estimates the effect as given in Equation (2). Problem (c) is controlled by Obama, who has the choice between his "change" message, or attacking Clinton for her position on Iraq, see Equation (3).

    X_a = 0.3 + 0.7·X_a^2 ∨ 0.1 + 0.9·X_c                        (1)
    X_b = X_c ∨ 0.4·X_b + 0.3·X_c                                 (2)
    X_c = 0.5·X_b + 0.3·X_b^2 ∧ X_a + 0.4·X_a·X_b + 0.1·X_b       (3)

What should Clinton and Obama do? What are the extinction probabilities, assuming perfect strategies? In the next sections we show how to efficiently solve these problems.

5 The τ-Method

Assume that f denotes a feasible min-max-MSP. In this section we present our first method for computing µf approximatively. We call it the τ-method. This method computes, for each approximant x^(i), the next approximant x^(i+1) as the least fixpoint of a piecewise linear approximation L(f, x^(i)) ∨ x^(i) (see below) of f at x^(i). This approximation is a system of linear min-max-polynomials where all coefficients of monomials of degree 1 are non-negative. Here, we call such a system a monotone linear min-max-system (min-max-MLS for short). Note that a min-max-MLS is not necessarily a min-max-MSP, since negative coefficients of monomials of degree 0 are allowed; e.g. the min-max-MLS f(x_1) = x_1 - 1 is not a min-max-MSP. In [8] a min-max-MLS f is considered as a system of equations (called a system of rational equations in [8]), which we denote by X = f(X) in vector form. We identify a min-max-MLS f with its interpretation as a function from R̄^n to R̄^n (where R̄ denotes the complete lattice R ∪ {-∞, ∞}). Since f is monotone on R̄^n, it has a least fixpoint µf ∈ R̄^n which can be computed using the strategy improvement algorithm from [8].

We now define the min-max-MLS L(f, y), a piecewise linear approximation of f at y. As a first step, let us consider a monotone polynomial f: R_{≥0}^n -> R_{≥0}. Given some approximant y ∈ R_{≥0}^n, a linear approximation L(f, y): R^n -> R of f at y is given by the first-order Taylor approximation at y, i.e.,

    L(f, y)(x) := f(y) + f'(y)(x - y),   x ∈ R^n.
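For a single monotone polynomial, solving x = L(g, y)(x) is one classical Newton step. As a hedged one-dimensional illustration (not taken from the paper's text), the sketch below applies it to the MSP g(x) = 1/2·x^2 + 1/2 with µg = 1, the system revisited in Section 7; each step gains roughly one bit of accuracy.

    # One-dimensional Newton iteration for the MSP g(x) = 1/2*x^2 + 1/2 (mu g = 1).
    # The next iterate solves x = L(g, y)(x) = g(y) + g'(y)*(x - y).

    def g(x):  return 0.5 * x**2 + 0.5
    def dg(x): return x

    y = 0.0
    for k in range(10):
        # solve x = g(y) + dg(y)*(x - y) for x; below mu g we have dg(y) < 1
        y = (g(y) - dg(y) * y) / (1.0 - dg(y))
        print(k + 1, y)     # the error is halved in every step: 0.5, 0.75, 0.875, ...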

The Taylor approximation L(f, y) is precisely the linear approximation which is used for Newton's method. Now consider a max-polynomial f = f_1 ∨ ... ∨ f_k: R^n -> R. We define the approximation L(f, y): R^n -> R of f at y by L(f, y) := L(f_1, y) ∨ ... ∨ L(f_k, y). We emphasize that in this case L(f, y) is in general not a linear function but a linear max-polynomial. Accordingly, for a min-polynomial f = f_1 ∧ ... ∧ f_k: R^n -> R, we define L(f, y) := L(f_1, y) ∧ ... ∧ L(f_k, y). In this case L(f, y) is a linear min-polynomial. Finally, for a min-max-MSP f: R^n -> R^n, we define the approximation L(f, y): R^n -> R^n of f at y by L(f, y) := (L(f_1, y), ..., L(f_n, y)), which is a min-max-MLS.

Example 4. Consider the min-max-MSP f from Example 1. The approximation L(f, (1/2, 1/2)) is given by L(f, (1/2, 1/2))(x_1, x_2) = (1/2·x_2 + 3/8 ∧ 3, x_1 ∨ 2).

Using the approximation L(f, x^(i)) we define the operator N_f: R_{≥0}^n -> R_{≥0}^n which gives us, for an approximant x^(i), the next approximant x^(i+1):

    N_f(x) := µ(L(f, x) ∨ x),   x ∈ R_{≥0}^n.

Observe that L(f, x) ∨ x is still a min-max-MLS (at least after introducing auxiliary variables in order to eliminate components which contain both ∧- and ∨-operators).

Example 5. For f from Example 1 we have: N_f(1/2, 1/2) = µ(L(f, (1/2, 1/2)) ∨ (1/2, 1/2)) = (11/8, 2).

We collect basic properties of N_f in the following lemma:

Lemma 3. Let f be a feasible min-max-MSP and x, y ∈ R_{≥0}^n. Then:
1. x, f(x) ≤ N_f(x);
2. x = N_f(x) whenever x = f(x);
3. (monotonicity of N_f) N_f(x) ≤ N_f(y) whenever x ≤ y;
4. N_f(x) ≤ f(N_f(x)) whenever x ≤ f(x);
5. N_{f^σ}(x) ≤ N_f(x) for every ∨-strategy σ ∈ Σ;
6. N_f(x) ≤ N_{f^π}(x) for every ∧-strategy π ∈ Π;
7. N_f(x) = N_{f^π}(x) for some ∧-strategy π ∈ Π.

In particular, Lemma 3 implies that the least fixpoint of N_f is equal to the least fixpoint of f. Moreover, iteration based on N_f is at least as fast as Kleene iteration. We therefore use this operator for computing approximants of the least fixpoint. Formally, we define:

Definition 1. We call the sequence (τ^(k)) of approximants defined by τ^(k) := N_f^k(0) for k ∈ N the τ-sequence for f. We drop the subscript f if it is clear from the context.

Proposition 1. Let f be a feasible min-max-MSP. The τ-sequence (τ^(k)) for f (see Definition 1) is monotonically increasing, bounded from above by µf, and converges to µf. Moreover, κ^(k) ≤ τ^(k) for all k ∈ N.

We now show that the new approximation method converges at least linearly to the least fixpoint. Theorem 6.2 of [2] implies the following lemma about the convergence of Newton's method for MSPs, i.e., systems without maxima and minima.

Lemma 4. Let f be a feasible quadratic MSP. The sequence (τ^(k))_{k∈N} converges linearly to µf. More precisely, there is a k_f ∈ N such that τ^(k_f + i·(n+1)·2^n) has at least i valid bits of µf for every i ∈ N.
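The operator N_f can be evaluated by hand for Example 5. The sketch below is an illustration under an assumption: instead of the strategy improvement algorithm of [8], which the τ-method uses in general, it runs plain fixpoint iteration on the min-max-MLS L(f, (1/2, 1/2)) ∨ (1/2, 1/2), which happens to stabilize after a few steps for this tiny system; the concrete linearization follows Example 4.

    import numpy as np

    y = np.array([0.5, 0.5])

    # Taylor linearizations at y of the polynomials occurring in f (Example 1):
    #   p1(x) = 1/2*x2^2 + 1/2 ,  p2(x) = 3 ,  p3(x) = x1 ,  p4(x) = 2
    def lin(value, grad):
        return lambda x: value + grad @ (x - y)

    L1 = lin(0.5 * y[1]**2 + 0.5, np.array([0.0, y[1]]))   # L(p1, y) = 0.5*x2 + 3/8
    L2 = lin(3.0, np.zeros(2))                              # L(p2, y)
    L3 = lin(y[0], np.array([1.0, 0.0]))                    # L(p3, y)
    L4 = lin(2.0, np.zeros(2))                              # L(p4, y)

    def L_f_or_y(x):      # (L(f, y) MAX y)(x), component-wise
        return np.array([max(min(L1(x), L2(x)), y[0]),
                         max(max(L3(x), L4(x)), y[1])])

    x = np.zeros(2)
    for _ in range(100):          # plain fixpoint iteration; stabilizes here
        x = L_f_or_y(x)
    print(x)                      # N_f(1/2, 1/2) = (1.375, 2.0), cf. Example 5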

We emphasize that linear convergence is the worst case. In many practical examples, in particular if the matrix I - f'(µf) is invertible, Newton's method converges exponentially. We mean by this that the number of accurate bits of the approximation grows exponentially in the number of iterations.

As a first step towards our main result for this section, we use Lemma 4 to show that our approximation method converges linearly whenever f is a max-MSP. In this case we obtain the same convergence speed as for MSPs.

Lemma 5. Let f be a feasible max-MSP. Let M := {σ ∈ Σ | µf^σ = µf}. The set M is non-empty and τ^(i) ≥ τ_{f^σ}^(i) for all σ ∈ M and i ∈ N.

Proof. Theorem 1 implies that there exists a ∨-strategy σ ∈ Σ such that µf^σ = µf. Thus M is non-empty. Let σ ∈ M. By induction on k, Lemma 3 implies τ^(k) = N_f^k(0) ≥ N_{f^σ}^k(0) = τ_{f^σ}^(k) for every k ∈ N.

Combining Lemma 4 and Lemma 5 we get linear convergence for max-MSPs:

Theorem 2. Let f be a feasible quadratic max-MSP. The τ-sequence (τ^(k)) for f (see Definition 1) converges linearly to µf. More precisely, there is a k_f ∈ N such that τ^(k_f + i·(n+1)·2^n) has at least i valid bits of µf for every i ∈ N.

A direct consequence of Lemma 5 is that the τ-sequence (τ^(i)) converges exponentially if (τ_{f^σ}^(i)) converges exponentially for some σ ∈ Σ with µf^σ = µf. This is in particular the case if the matrix I - (f^σ)'(µf) is invertible.

In order to extend this result to min-max-MSPs we state the following lemma, which enables us to relate the sequence (τ^(i)) to the sequences (τ_{f^π}^(i)) where µf^π = µf.

Lemma 6. Let f be a feasible min-max-MSP and let m denote the number of strategies π ∈ Π with µf = µf^π. There is a constant k_f ∈ N such that for all i ∈ N there exists some strategy π ∈ Π with µf = µf^π and τ_{f^π}^(i) ≤ τ^(k_f + m·i).

We now present the main result of this section, which states that our approximation method converges at least linearly also in the general case, i.e., for min-max-MSPs.

Theorem 3. Let f be a feasible quadratic min-max-MSP and let m denote the number of strategies π ∈ Π with µf = µf^π. The τ-sequence (τ^(k)) for f (see Definition 1) converges linearly to µf. More precisely, there is a k_f ∈ N such that τ^(k_f + i·m·(n+1)·2^n) has at least i valid bits of µf for every i ∈ N.

The upper bound on the convergence rate provided by Theorem 3 is by the factor m worse than the upper bound obtained for MSPs. Since m is the number of strategies π ∈ Π with µf^π = µf, m is trivially bounded by |Π| but is usually much smaller. The τ-sequence (τ^(i)) converges exponentially whenever (τ_{f^π}^(i)) converges exponentially for every π with µf^π = µf (see [?]). The latter condition is typically satisfied (see the discussion after Theorem 2).

In order to determine the approximant τ^(i+1) = N_f(τ^(i)) from τ^(i), we must compute the least fixpoint of the min-max-MLS L(f, τ^(i)) ∨ τ^(i). This can be done by using the strategy improvement algorithm from [8]. The algorithm iterates over ∨-strategies. For each such strategy it solves a linear program or, alternatively, iterates over ∧-strategies. The number of ∨-strategies used by this algorithm is trivially bounded by the number of ∨-strategies for L(f, τ^(i)) ∨ τ^(i), which is exponential in the number of ∨-expressions occurring in L(f, τ^(i)) ∨ τ^(i). However, we do not know an example for which the algorithm considers more than linearly many strategies.

6 The ν-Method

The τ-method, presented in the previous section, uses strategy iteration over ∨-strategies to compute N_f(y). This could be expensive, as there may be exponentially many ∨-strategies. Therefore, we derive an alternative generalization of Newton's method that in each step picks the currently most promising ∨-strategy directly, without strategy iteration. Consider again a fixed feasible min-max-MSP f whose least fixpoint we want to approximate. Assume that y is some approximant of µf. Instead of applying N_f to y, as the τ-method does, we now choose a strategy σ ∈ Σ such that f(y) = f^σ(y), and compute N_{f^σ}(y), where N_{f^σ} was defined in Section 5 as N_{f^σ}(y) := µ(L(f^σ, y) ∨ y). In the following we write N_σ instead of N_{f^σ} if f is understood.

Assume for a moment that f is a max-MSP and that there is a unique σ ∈ Σ such that f(y) = f^σ(y). The approximant N_σ(y) is the result of applying one iteration of Newton's method, because L(f^σ, y) is not only a linearization of f^σ, but the first-order Taylor approximation of f at y. More precisely, L(f^σ, y)(x) = f(y) + f'(y)·(x - y), and N_σ(y) is obtained by solving x = L(f^σ, y)(x). In this sense, the ν-method is a more direct generalization of Newton's method than the τ-method.

Formally, we define the ν-method by a sequence of approximants, the ν-sequence.

Definition 2 (ν-sequence). A sequence (ν^(k))_{k∈N} is called a ν-sequence of a min-max-MSP f if ν^(0) = 0 and for each k there is a strategy σ^(k) ∈ Σ with f^{σ^(k)}(ν^(k)) = f(ν^(k)) and ν^(k+1) = N_{σ^(k)}(ν^(k)). We may drop the subscript f if f is understood.

Notice the nondeterminism here if there is more than one ∨-strategy that attains f(ν^(k)). The following proposition is analogous to Proposition 1 and states some basic properties of ν-sequences.

Proposition 2. Let f be a feasible min-max-MSP. The sequence (ν^(k)) is monotonically increasing, bounded from above by µf, and converges to µf. More precisely, we have κ^(k) ≤ ν^(k) ≤ f(ν^(k)) ≤ ν^(k+1) ≤ µf for all k ∈ N.

The goal of this section is again to strengthen Proposition 2 towards quantitative convergence results for ν-sequences. To achieve this goal we again relate the convergence of ν-sequences to the convergence of Newton's method for MSPs. If f is an MSP, Lemma 4 allows to argue about the Newton operator N_f when applied to approximants x ≤ µf. To transfer this result to min-max-MSPs we need an invariant like ν^(k) ≤ µf^{σ^(k)} for ν-sequences. As a first step towards such an invariant we further restrict the selection of the σ^(k): roughly speaking, the strategy in a component i is only changed when it is immediate that component i has not yet reached its fixpoint.
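The ν-method's strategy selection can be pictured concretely. The following Python sketch is an illustration only (the names and dictionary encoding are ad hoc, not from the paper): for the Example 1 system it selects a ∨-strategy σ with f^σ(y) = f(y) and applies the lazy restriction that Definition 3 below makes precise, keeping the previous choice in components i with f_i(y) = y_i (within a lazy ν-sequence that choice still attains f_i(y)).

    # Greedy selection of a max-strategy with the lazy restriction, for Example 1.
    # Component 1 (0-based) of f is the only max-polynomial: x1 MAX 2.

    def f(y):
        y1, y2 = y
        return (min(0.5 * y2**2 + 0.5, 3.0), max(y1, 2.0))

    def branches(y):                        # component index -> [(name, value at y)]
        y1, y2 = y
        return {1: [("x1", y1), ("2", 2.0)]}

    def lazy_update(y, prev_sigma):
        sigma = dict(prev_sigma)
        for i, alts in branches(y).items():
            if f(y)[i] == y[i]:             # component already at its fixpoint:
                continue                    # keep the previous choice (laziness)
            sigma[i] = max(alts, key=lambda t: t[1])[0]   # maximizing branch at y
        return sigma

    print(lazy_update((0.5, 2.0), {1: "2"}))   # component fixed -> keeps '2'
    print(lazy_update((2.5, 2.0), {1: "2"}))   # f_2(y) = 2.5 > y_2 -> switches to 'x1'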

Algorithm 1: the lazy ν-method.

procedure lazy-ν(f, k)
  assumes: f is a min-max-MSP
  returns: ν^(k), σ^(k) obtained by k iterations of the lazy ν-method
  ν <- 0
  σ <- any σ ∈ Σ such that f(0) = f^σ(0)
  for i from 1 to k do
    ν <- N_σ(ν)
    σ <- lazy strategy update from ν and σ
  od
  return ν, σ

procedure N_σ(y)
  assumes: f^σ is a min-MSP, y ∈ R_{≥0}^n
  returns: µ(L(f^σ, y) ∨ y)
  g <- linear min-MSP with g(d) = L(f^σ, y)(y + d) - y
  u <- κ_g^(n)   (the n-th Kleene approximant of g)
  g̃ <- (g̃_1, ..., g̃_n) where g̃_i := 0 if u_i = 0 and g̃_i := g_i if u_i > 0
  d <- maximize x_1 + ... + x_n subject to 0 ≤ x ≤ g̃(x) by one LP
  return y + d

Definition 3 (lazy strategy update). Let x ≤ f^σ(x) for a σ ∈ Σ. We say that σ' ∈ Σ is obtained from x and σ by a lazy strategy update if f(x) = f^{σ'}(x) and σ'(f_i) = σ(f_i) holds for all components i with f_i(x) = x_i. We call a ν-sequence (ν^(k))_{k∈N} lazy if for all k the strategy σ^(k) is obtained from ν^(k) and σ^(k-1) by a lazy strategy update.

The key property of lazy ν-sequences is the following non-trivial invariant.

Lemma 7. Let (ν^(k))_{k∈N} be a lazy ν-sequence. Then ν^(k) ≤ µ(f^{σ^(k)π}) holds for all k ∈ N and all π ∈ Π^f.

The following example shows that lazy strategy updates are essential to Lemma 7, even for max-MSPs.

Example 6. Consider the max-MSP f(x, y) = (1/2 ∨ x, xy). Let σ^(0)(1/2 ∨ x) = 1/2 and σ^(1)(1/2 ∨ x) = x. Then there is a ν-sequence (ν^(k)) with ν^(0) = 0, ν^(1) = N_{σ^(0)}(0) = (1/2, 0), ν^(2) = N_{σ^(1)}(ν^(1)). However, the conclusion of Lemma 7 does not hold, because (1/2, 0) = ν^(1) is not ≤ µf^{σ^(1)} = (0, 0). Notice that σ^(1) is not obtained by a lazy strategy update, as f_1(ν^(1)) = ν^(1)_1.

Lemma 7 falls short of our subgoal to establish ν^(k) ≤ µf^{σ^(k)}, because Π \ Π^f might be non-empty. In fact, we provide an example in [?] showing that ν^(k) ≤ µ(f^{σ^(k)π}) does not always hold for all π ∈ Π, even when f^{σ^(k)π} is feasible. Luckily, Lemma 7 will suffice for our convergence speed result.

The first procedure of Algorithm 1 summarizes the lazy ν-method, which works by computing lazy ν-sequences. The following lemma relates the ν-method for min-max-MSPs to Newton's method for MSPs.

Lemma 8. Let f be a feasible min-max-MSP and (ν^(k)) a lazy ν-sequence. Let m be the number of strategy pairs (σ, π) ∈ Σ x Π with µf = µf^{σπ}. Then m ≥ 1 and there is a constant k_f ∈ N such that, for all k ∈ N, there exist strategies σ ∈ Σ and π ∈ Π with µf = µf^{σπ} and ν^(k_f + m·k) ≥ τ_{f^{σπ}}^(k).

In typical cases, i.e., if I - (f^{σπ})'(µf) is invertible for all σ ∈ Σ and π ∈ Π with µf^{σπ} = µf, Newton's method converges exponentially. Theorem 4 below captures the worst case, in which the lazy ν-method still converges linearly.
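The LP step in the second procedure of Algorithm 1 can be reproduced with an off-the-shelf solver. The sketch below is an illustration under assumptions: the two-variable linear min-MSP g is made up for the example, scipy's linprog stands in for "by one LP", and the elimination of 0-components of µg (the Kleene steps and g̃) is skipped because µg has no 0-components here. By Lemma 9 below, µg is then the greatest x with x ≤ g(x), so maximizing x_1 + x_2 subject to x ≤ g(x) and x ≥ 0 yields it.

    # Least fixpoint of a linear min-MSP via one LP.  Example system (made up):
    #   g(x1, x2) = ( 0.5*x2 + 0.75  MIN  2 ,  0.5*x1 + 0.25 )
    import numpy as np
    from scipy.optimize import linprog

    # every row encodes one alternative "x_i <= a.x + b" as (x_i - a.x) <= b
    A_ub = np.array([[ 1.0, -0.5],    # x1 <= 0.5*x2 + 0.75
                     [ 1.0,  0.0],    # x1 <= 2
                     [-0.5,  1.0]])   # x2 <= 0.5*x1 + 0.25
    b_ub = np.array([0.75, 2.0, 0.25])

    res = linprog(c=[-1.0, -1.0],               # maximize x1 + x2
                  A_ub=A_ub, b_ub=b_ub,
                  bounds=[(0, None), (0, None)], method="highs")
    print(res.x)                                # mu g = (7/6, 5/6) ~ (1.167, 0.833)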

Theorem 4. Let f be a quadratic feasible min-max-MSP. The lazy ν-sequence (ν^(k))_{k∈N} converges linearly to µf. More precisely, let m be the number of strategy pairs (σ, π) ∈ Σ x Π with µf = µf^{σπ}. Then there is a k_f ∈ N such that ν^(k_f + i·m·(n+1)·2^n) has at least i valid bits of µf for every i ∈ N.

Next we show that N_σ(y) can be computed exactly by solving a single LP. The second procedure of Algorithm 1 accomplishes this by taking advantage of the following proposition, which states that N_σ(y) can be determined by computing the least fixpoint of some linear min-MSP g.

Proposition 3. Let y ≤ f^σ(y) ≤ µf. Then N_σ(y) = y + µg for the linear min-MSP g with g(d) = L(f^σ, y)(y + d) - y.

After having computed the linear min-MSP g, Algorithm 1 determines the 0-components of µg. This can be done by performing n Kleene steps, since (µg)_i = 0 whenever (κ_g^(n))_i = 0. Let g̃ be the linear min-MSP obtained from g by substituting the constant 0 for all components g_i with (µg)_i = 0. The least fixpoint of g̃ (note that µg̃ = µg) can be computed by solving a single LP, as implied by the following lemma. The correctness of Algorithm 1 follows.

Lemma 9. Let g be a linear min-MSP such that g_i = 0 whenever (µg)_i = 0 for all components i. Then µg is the greatest vector x with x ≤ g(x).

The following theorem is a direct consequence of Lemma 7 for the case where Π^f = Π. It shows the second major advantage of the lazy ν-method, namely that the strategies σ^(k) are meaningful in terms of games.

Theorem 5. Let Π^f = Π and let (ν^(k))_{k∈N} be a lazy ν-sequence. Then ν^(k) ≤ µf^{σ^(k)} holds for all k ∈ N.

As (ν^(k)) converges to µf, the max-strategy σ^(k) can be considered epsilon-optimal. In terms of games, Theorem 5 states that the strategy σ^(k) guarantees the max-player an outcome of at least ν^(k). It is open whether an analogous theorem holds for the τ-method.

Application to the primaries example. We solved the equation system of Section 4 approximatively by performing 5 iterations of the lazy ν-method. Using Theorem 5 we found that Clinton can extinguish a problem (a) individual with a probability of at least (ν^(5))_a by concentrating on her program and her "ready from day 1" message. (More than 70 Kleene iterations would be needed to infer that X_a is at least 0.49.) As ν^(5) seems to solve the above equation system quite well, in the sense that f(ν^(5)) - ν^(5) is small, we are pretty sure about Obama's optimal strategy: he should talk about Iraq. As (ν^(2))_a > 0.38 and σ^(2) maps the max-polynomial for X_a to the branch containing X_a^2, i.e., to showing emotions, Clinton's team can use Theorem 5 to infer that X_a ≥ 0.38 can already be guaranteed by showing emotions and using her "ready from day 1" message.

7 Discussion

In order to compare our two methods in terms of convergence speed, assume that f denotes a feasible min-max-MSP. Since N_{f^σ}(x) ≤ N_f(x) (Lemma 3.5), it follows that ν^(i) ≤ τ^(i) holds for all i ∈ N. This means that the τ-method is at least as fast as the ν-method if one counts the number of approximation steps.

Next, we construct an example which shows that the number of approximation steps needed by the lazy ν-method can be much larger than the respective number needed by the τ-method. The example is parameterized with an arbitrary k ∈ N; since the constant 2^(-2(k+1)) occurring in it is represented using O(k) bits, it is of size linear in k. It can be shown (see [?]) that the lazy ν-method needs at least k steps; more precisely, ν^(k) ≤ (1.5, 1.95). The τ-method needs exactly 2 steps.

We now compare our approaches with the tool PReMo [12]. PReMo employs four different techniques to approximate µf for min-max-MSPs: it uses Newton's method only for MSPs, without min or max. In this case both of our methods coincide with Newton's method. For min-max-MSPs, PReMo uses Kleene iteration, round-robin iteration (called Gauss-Seidel in [12]), and an optimistic variant of Kleene iteration which is not guaranteed to converge. In the following we compare our algorithms only with Kleene iteration, as our algorithms are guaranteed to converge and a round-robin step is not faster than n Kleene steps. Our methods improve on Kleene iteration in the sense that κ^(i) ≤ τ^(i), ν^(i) holds for all i ∈ N, and our methods converge linearly, whereas Kleene iteration does not converge linearly in general. For example, consider the MSP g(x) = 1/2·x^2 + 1/2 with µg = 1. Kleene iteration needs exponentially many iterations to obtain j bits [3], whereas Newton's method gives exactly 1 bit per iteration. For the slightly modified MSP g̃(x) = g(x) ∧ 1, which has the same fixpoint, PReMo no longer uses Newton's method, as g̃ contains a minimum. Our algorithms still produce exactly 1 bit per iteration.

In the case of linear min-max systems our methods compute the precise solution and not only an approximation. This applies, for example, to the max-linear system of [12] describing the expected time of termination of a nondeterministic variant of Quicksort. Notice that Kleene iteration does not compute the precise solution (except for trivial instances), even for linear MSPs without min or max.

We implemented our algorithms prototypically in Maple and ran them on the quadratic nonlinear min-max-MSP describing the termination probabilities of a recursive simple stochastic game. This game stems from the example suite of PReMo (rssg2.c) and we used PReMo to produce the equations. Both of our algorithms reached the least fixpoint after 2 iterations. So we could compute the precise µf and optimal strategies for both players, whereas PReMo computes only approximations of µf.

8 Conclusion

We have presented the first methods for approximatively computing the least fixpoint of min-max-MSPs which are guaranteed to converge at least linearly. Both of them are generalizations of Newton's method. Whereas the τ-method converges faster in terms of the number of approximation steps, one approximation step of the ν-method is cheaper. Furthermore, we have shown that the ν-method computes epsilon-optimal strategies for games. Whether such a result can also be established for the τ-method is still open. A direction for future research is to evaluate our methods in practice. In particular, the influence of imprecise computation through floating-point arithmetic should be studied. It would also be desirable to find a bound on the threshold k_f.

Acknowledgement. We thank the anonymous referees for valuable comments.

References

1. A. Condon. The complexity of stochastic games. Information and Computation, 96(2):203-224, 1992.
2. J. Esparza, S. Kiefer, and M. Luttenberger. Convergence thresholds of Newton's method for monotone polynomial equations. In Proceedings of STACS 2008. To appear.
3. K. Etessami and M. Yannakakis. Recursive Markov chains, stochastic grammars, and monotone systems of nonlinear equations. In STACS 2005. Springer, 2005.
4. K. Etessami and M. Yannakakis. Recursive Markov decision processes and recursive stochastic games. In Proc. ICALP 2005, volume 3580 of LNCS. Springer, 2005.
5. K. Etessami and M. Yannakakis. Efficient qualitative analysis of classes of recursive Markov decision processes and simple stochastic games. In STACS 2006. Springer, 2006.
6. J. Filar and K. Vrieze. Competitive Markov Decision Processes. Springer, 1997.
7. T. Gawlitza and H. Seidl. Precise fixpoint computation through strategy iteration. In European Symposium on Programming (ESOP), volume 4421 of LNCS. Springer, 2007.
8. T. Gawlitza and H. Seidl. Precise relational invariants through strategy iteration. In J. Duparc and T. A. Henzinger, editors, CSL, volume 4646 of LNCS. Springer, 2007.
9. S. Kiefer, M. Luttenberger, and J. Esparza. On the convergence of Newton's method for monotone systems of polynomial equations. In STOC 2007. ACM, 2007.
10. A. Neyman and S. Sorin, editors. Stochastic Games and Applications. Kluwer Academic Press, 2003.
11. J. Ortega and W. Rheinboldt. Iterative Solution of Nonlinear Equations in Several Variables. Academic Press, 1970.
12. D. Wojtczak and K. Etessami. PReMo: An analyzer for probabilistic recursive models. In Proc. of TACAS, volume 4424 of LNCS. Springer, 2007.


More information

A Consistent Generation of Pipeline Parallelism and Distribution of Operations and Data among Processors

A Consistent Generation of Pipeline Parallelism and Distribution of Operations and Data among Processors ISSN 0361-7688 Programming and Computer Sotware 006 Vol. 3 No. 3 pp. 166176. Pleiades Publishing Inc. 006. Original Russian Text E.V. Adutskevich N.A. Likhoded 006 published in Programmirovanie 006 Vol.

More information

AH 2700A. Attenuator Pair Ratio for C vs Frequency. Option-E 50 Hz-20 khz Ultra-precision Capacitance/Loss Bridge

AH 2700A. Attenuator Pair Ratio for C vs Frequency. Option-E 50 Hz-20 khz Ultra-precision Capacitance/Loss Bridge 0 E ttenuator Pair Ratio or vs requency NEEN-ERLN 700 Option-E 0-0 k Ultra-precision apacitance/loss ridge ttenuator Ratio Pair Uncertainty o in ppm or ll Usable Pairs o Taps 0 0 0. 0. 0. 07/08/0 E E E

More information

Numerical Solution of Ordinary Differential Equations in Fluctuationlessness Theorem Perspective

Numerical Solution of Ordinary Differential Equations in Fluctuationlessness Theorem Perspective Numerical Solution o Ordinary Dierential Equations in Fluctuationlessness Theorem Perspective NEJLA ALTAY Bahçeşehir University Faculty o Arts and Sciences Beşiktaş, İstanbul TÜRKİYE TURKEY METİN DEMİRALP

More information

The Deutsch-Jozsa Problem: De-quantization and entanglement

The Deutsch-Jozsa Problem: De-quantization and entanglement The Deutsch-Jozsa Problem: De-quantization and entanglement Alastair A. Abbott Department o Computer Science University o Auckland, New Zealand May 31, 009 Abstract The Deustch-Jozsa problem is one o the

More information

The Clifford algebra and the Chevalley map - a computational approach (detailed version 1 ) Darij Grinberg Version 0.6 (3 June 2016). Not proofread!

The Clifford algebra and the Chevalley map - a computational approach (detailed version 1 ) Darij Grinberg Version 0.6 (3 June 2016). Not proofread! The Cliord algebra and the Chevalley map - a computational approach detailed version 1 Darij Grinberg Version 0.6 3 June 2016. Not prooread! 1. Introduction: the Cliord algebra The theory o the Cliord

More information

Limiting Behavior of Markov Chains with Eager Attractors

Limiting Behavior of Markov Chains with Eager Attractors Limiting Behavior of Markov Chains with Eager Attractors Parosh Aziz Abdulla Uppsala University, Sweden. parosh@it.uu.se Noomene Ben Henda Uppsala University, Sweden. Noomene.BenHenda@it.uu.se Sven Sandberg

More information

NONLINEAR CONTROL OF POWER NETWORK MODELS USING FEEDBACK LINEARIZATION

NONLINEAR CONTROL OF POWER NETWORK MODELS USING FEEDBACK LINEARIZATION NONLINEAR CONTROL OF POWER NETWORK MODELS USING FEEDBACK LINEARIZATION Steven Ball Science Applications International Corporation Columbia, MD email: sball@nmtedu Steve Schaer Department o Mathematics

More information

Least-Squares Spectral Analysis Theory Summary

Least-Squares Spectral Analysis Theory Summary Least-Squares Spectral Analysis Theory Summary Reerence: Mtamakaya, J. D. (2012). Assessment o Atmospheric Pressure Loading on the International GNSS REPRO1 Solutions Periodic Signatures. Ph.D. dissertation,

More information

A Survey of Partial-Observation Stochastic Parity Games

A Survey of Partial-Observation Stochastic Parity Games Noname manuscript No. (will be inserted by the editor) A Survey of Partial-Observation Stochastic Parity Games Krishnendu Chatterjee Laurent Doyen Thomas A. Henzinger the date of receipt and acceptance

More information

On a Closed Formula for the Derivatives of e f(x) and Related Financial Applications

On a Closed Formula for the Derivatives of e f(x) and Related Financial Applications International Mathematical Forum, 4, 9, no. 9, 41-47 On a Closed Formula or the Derivatives o e x) and Related Financial Applications Konstantinos Draais 1 UCD CASL, University College Dublin, Ireland

More information

Presenters. Time Domain and Statistical Model Development, Simulation and Correlation Methods for High Speed SerDes

Presenters. Time Domain and Statistical Model Development, Simulation and Correlation Methods for High Speed SerDes JANUARY 28-31, 2013 SANTA CLARA CONVENTION CENTER Time Domain and Statistical Model Development, Simulation and Correlation Methods or High Speed SerDes Presenters Xingdong Dai Fangyi Rao Shiva Prasad

More information

CHAPTER 1: INTRODUCTION. 1.1 Inverse Theory: What It Is and What It Does

CHAPTER 1: INTRODUCTION. 1.1 Inverse Theory: What It Is and What It Does Geosciences 567: CHAPTER (RR/GZ) CHAPTER : INTRODUCTION Inverse Theory: What It Is and What It Does Inverse theory, at least as I choose to deine it, is the ine art o estimating model parameters rom data

More information

Percentile Policies for Inventory Problems with Partially Observed Markovian Demands

Percentile Policies for Inventory Problems with Partially Observed Markovian Demands Proceedings o the International Conerence on Industrial Engineering and Operations Management Percentile Policies or Inventory Problems with Partially Observed Markovian Demands Farzaneh Mansouriard Department

More information

A PROBABILISTIC POWER DOMAIN ALGORITHM FOR FRACTAL IMAGE DECODING

A PROBABILISTIC POWER DOMAIN ALGORITHM FOR FRACTAL IMAGE DECODING Stochastics and Dynamics, Vol. 2, o. 2 (2002) 6 73 c World Scientiic Publishing Company A PROBABILISTIC POWER DOMAI ALGORITHM FOR FRACTAL IMAGE DECODIG V. DRAKOPOULOS Department o Inormatics and Telecommunications,

More information

Exact and Approximate Equilibria for Optimal Group Network Formation

Exact and Approximate Equilibria for Optimal Group Network Formation Exact and Approximate Equilibria for Optimal Group Network Formation Elliot Anshelevich and Bugra Caskurlu Computer Science Department, RPI, 110 8th Street, Troy, NY 12180 {eanshel,caskub}@cs.rpi.edu Abstract.

More information

Physics 5153 Classical Mechanics. Solution by Quadrature-1

Physics 5153 Classical Mechanics. Solution by Quadrature-1 October 14, 003 11:47:49 1 Introduction Physics 5153 Classical Mechanics Solution by Quadrature In the previous lectures, we have reduced the number o eective degrees o reedom that are needed to solve

More information

On Picard value problem of some difference polynomials

On Picard value problem of some difference polynomials Arab J Math 018 7:7 37 https://doiorg/101007/s40065-017-0189-x Arabian Journal o Mathematics Zinelâabidine Latreuch Benharrat Belaïdi On Picard value problem o some dierence polynomials Received: 4 April

More information

Recursive Stochastic Games with Positive Rewards

Recursive Stochastic Games with Positive Rewards Recursive Stochastic Games with Positive Rewards K. Etessami 1, D. Wojtczak 1, and M. Yannakakis 2 1 LFCS, School of Informatics, University of Edinburgh 2 Dept. of Computer Science, Columbia University

More information

CS 361 Meeting 28 11/14/18

CS 361 Meeting 28 11/14/18 CS 361 Meeting 28 11/14/18 Announcements 1. Homework 9 due Friday Computation Histories 1. Some very interesting proos o undecidability rely on the technique o constructing a language that describes the

More information

Towards a Flowchart Diagrammatic Language for Monad-based Semantics

Towards a Flowchart Diagrammatic Language for Monad-based Semantics Towards a Flowchart Diagrammatic Language or Monad-based Semantics Julian Jakob Friedrich-Alexander-Universität Erlangen-Nürnberg julian.jakob@au.de 21.06.2016 Introductory Examples 1 2 + 3 3 9 36 4 while

More information

9.3 Graphing Functions by Plotting Points, The Domain and Range of Functions

9.3 Graphing Functions by Plotting Points, The Domain and Range of Functions 9. Graphing Functions by Plotting Points, The Domain and Range o Functions Now that we have a basic idea o what unctions are and how to deal with them, we would like to start talking about the graph o

More information

An Extension of Newton s Method to ω-continuous Semirings

An Extension of Newton s Method to ω-continuous Semirings An Extension of Newton s Method to ω-continuous Semirings Javier Esparza, Stefan Kiefer, and Michael Luttenberger Institute for Formal Methods in Computer Science Universität Stuttgart, Germany {esparza,kiefersn,luttenml}@informatik.uni-stuttgart.de

More information

Received: 30 July 2017; Accepted: 29 September 2017; Published: 8 October 2017

Received: 30 July 2017; Accepted: 29 September 2017; Published: 8 October 2017 mathematics Article Least-Squares Solution o Linear Dierential Equations Daniele Mortari ID Aerospace Engineering, Texas A&M University, College Station, TX 77843, USA; mortari@tamu.edu; Tel.: +1-979-845-734

More information

SOME CHARACTERIZATIONS OF HARMONIC CONVEX FUNCTIONS

SOME CHARACTERIZATIONS OF HARMONIC CONVEX FUNCTIONS International Journal o Analysis and Applications ISSN 2291-8639 Volume 15, Number 2 2017, 179-187 DOI: 10.28924/2291-8639-15-2017-179 SOME CHARACTERIZATIONS OF HARMONIC CONVEX FUNCTIONS MUHAMMAD ASLAM

More information

Secure Communication in Multicast Graphs

Secure Communication in Multicast Graphs Secure Communication in Multicast Graphs Qiushi Yang and Yvo Desmedt Department o Computer Science, University College London, UK {q.yang, y.desmedt}@cs.ucl.ac.uk Abstract. In this paper we solve the problem

More information

9.1 The Square Root Function

9.1 The Square Root Function Section 9.1 The Square Root Function 869 9.1 The Square Root Function In this section we turn our attention to the square root unction, the unction deined b the equation () =. (1) We begin the section

More information

The Number of Fields Generated by the Square Root of Values of a Given Polynomial

The Number of Fields Generated by the Square Root of Values of a Given Polynomial Canad. Math. Bull. Vol. 46 (1), 2003 pp. 71 79 The Number o Fields Generated by the Square Root o Values o a Given Polynomial Pamela Cutter, Andrew Granville, and Thomas J. Tucker Abstract. The abc-conjecture

More information

APPENDIX 1 ERROR ESTIMATION

APPENDIX 1 ERROR ESTIMATION 1 APPENDIX 1 ERROR ESTIMATION Measurements are always subject to some uncertainties no matter how modern and expensive equipment is used or how careully the measurements are perormed These uncertainties

More information

: Cryptography and Game Theory Ran Canetti and Alon Rosen. Lecture 8

: Cryptography and Game Theory Ran Canetti and Alon Rosen. Lecture 8 0368.4170: Cryptography and Game Theory Ran Canetti and Alon Rosen Lecture 8 December 9, 2009 Scribe: Naama Ben-Aroya Last Week 2 player zero-sum games (min-max) Mixed NE (existence, complexity) ɛ-ne Correlated

More information

SIO 211B, Rudnick. We start with a definition of the Fourier transform! ĝ f of a time series! ( )

SIO 211B, Rudnick. We start with a definition of the Fourier transform! ĝ f of a time series! ( ) SIO B, Rudnick! XVIII.Wavelets The goal o a wavelet transorm is a description o a time series that is both requency and time selective. The wavelet transorm can be contrasted with the well-known and very

More information

Review D: Potential Energy and the Conservation of Mechanical Energy

Review D: Potential Energy and the Conservation of Mechanical Energy MSSCHUSETTS INSTITUTE OF TECHNOLOGY Department o Physics 8. Spring 4 Review D: Potential Energy and the Conservation o Mechanical Energy D.1 Conservative and Non-conservative Force... D.1.1 Introduction...

More information

Analysis of the regularity, pointwise completeness and pointwise generacy of descriptor linear electrical circuits

Analysis of the regularity, pointwise completeness and pointwise generacy of descriptor linear electrical circuits Computer Applications in Electrical Engineering Vol. 4 Analysis o the regularity pointwise completeness pointwise generacy o descriptor linear electrical circuits Tadeusz Kaczorek Białystok University

More information

The First-Order Theory of Ordering Constraints over Feature Trees

The First-Order Theory of Ordering Constraints over Feature Trees Discrete Mathematics and Theoretical Computer Science 4, 2001, 193 234 The First-Order Theory o Ordering Constraints over Feature Trees Martin Müller 1 and Joachim Niehren 1 and Ral Treinen 2 1 Programming

More information

Lab on Taylor Polynomials. This Lab is accompanied by an Answer Sheet that you are to complete and turn in to your instructor.

Lab on Taylor Polynomials. This Lab is accompanied by an Answer Sheet that you are to complete and turn in to your instructor. Lab on Taylor Polynomials This Lab is accompanied by an Answer Sheet that you are to complete and turn in to your instructor. In this Lab we will approimate complicated unctions by simple unctions. The

More information

Products and Convolutions of Gaussian Probability Density Functions

Products and Convolutions of Gaussian Probability Density Functions Tina Memo No. 003-003 Internal Report Products and Convolutions o Gaussian Probability Density Functions P.A. Bromiley Last updated / 9 / 03 Imaging Science and Biomedical Engineering Division, Medical

More information