Seminarberichte Mathematik


Seminarberichte Mathematik, Band … . Edited by the lecturers of mathematics.

Seminarberichte aus der Fakultät für Mathematik und Informatik der FernUniversität in Hagen. The Seminarberichte appear at irregular intervals; they contain reports on talks, conferences and seminars, surveys and preprints. Managing editor: Prof. Dr. A. Duma. Fakultät für Mathematik und Informatik, FernUniversität, Hagen, Federal Republic of Germany. ISSN … . FernUniversität in Hagen 2014.

Seminarberichte aus der Fakultät für Mathematik und Informatik der FernUniversität in Hagen. Edited by the lecturers of mathematics, 2014.


Contents (2014):

U. Bäsel, A. Duma: Intersection probabilities for random convex bodies and lattices of parallelograms … 1
G. Failla: Computing the partial liftings of Machado's binomial relations …
E. Grycko, W. Kirsch, T. M…: … Probleme der konvexen Optimierung mit Relevanz …
E. Grycko, W. Kirsch, T. M…: On the thermal voltage … in a virtual nanoconductor …
R. …: … Stochastic Growth Process by Conquering a Boundary …
…, D. Pumplün: Spaces of Lipschitz functions on metric spaces …
U. Bäsel, A. Duma: Intersection probabilities for random convex bodies and lattices of parallelograms (II) … 101
E. Grycko, W. Kirsch, T. M…: Spontaneous amplification of thermal noise …
E. Grycko, W. Kirsch, T. M…: On the thermal angular momentum of the electron …
E. Grycko, W. Kirsch, T. M…: On two estimates of the gravitational constant …
E. Grycko, W. Kirsch, T. M…: On quantum systems exposed to …


Intersection probabilities for random convex bodies and lattices of parallelograms

Uwe Bäsel, Andrei Duma

Abstract. In the first part of the paper we calculate the probabilities that a plane convex body intersects 1, 2, 3 or 4 parallelograms of a lattice of congruent parallelograms generated by two given families of equidistant lines in the plane. In the second part we consider the events that the convex body intersects the first family of lines (event A) and the second family (event B). We extend known results on the values of the angle α between the lines of the two families for which A and B are independent, and calculate these values for any rectangle and any ellipse.

AMS Classification: 60D05, 52A22

Keywords: random convex sets, intersection/hitting probabilities, lattice of parallelograms, Buffon problem, regular polygons, sets of constant width, orbiforms, Reuleaux polygons, Reuleaux triangle, independence of events

1 Introduction

We consider the random throw of a plane convex body C onto a plane ruled with two families R_a and R_b of parallel lines,

  R_a := {(x, y) ∈ R² : x sin α − y cos α = ka, k ∈ Z},
  R_b := {(x, y) ∈ R² : y = mb, m ∈ Z},

where a and b are positive real constants, α ∈ R, 0 < α ≤ π/2, and put R_{a,b,α} := R_a ∪ R_b. R_{a,b,α} is a lattice of parallelograms that are congruent to the parallelogram

  F := {(x, y) ∈ R² : 0 ≤ y ≤ b, y cot α ≤ x ≤ a csc α + y cot α}.

The random throw of C onto R_{a,b,α} is defined as follows: the coordinates x and y of a fixed point of C are random variables uniformly distributed in [y cot α, a csc α + y cot α) and [0, b), respectively; the angle φ between the x-axis and a fixed direction of C is a random variable uniformly distributed in [0, 2π). All three random variables are stochastically independent. Let u denote the

perimeter of C, F the area of C, and w(φ) the width of C in the direction φ. We assume

  max_{0 ≤ φ < π} w(φ) ≤ min(a, b);   (1)

in this case the probability that C intersects two lines of R_a (or of R_b) at the same time is equal to zero. A denotes the event that C intersects R_a, and B the event that C intersects R_b.

Laplace [10] found the intersection probability for a line segment (needle) of length l ≤ min(a, b) and R_{a,b,π/2}. Santaló [14, pp. 166/167] (see also [15, p. 139]) calculated the probabilities of 0, 1 and 2 intersections between such a needle and R_{a,b,α}, and Schuster [16] proved that the events A and B are independent for a certain angle α. Stoka [17, pp. 43/44] showed that A and B are independent for every α if C is a circle. Duma and Stoka [8, pp. 971/972] found the probability that an ellipse intersects R_{a,b,π/2}. Ren and Zhang [12, p. 320] derived a general formula for the probability that C has exactly i intersections with R_a and at the same time j intersections with R_b (without the restriction (1)). Ren and Zhang [12, p. 325] and Aleman et al. [1] proved that for every C there is a nonvanishing value of α for which the events A and B are independent. Explicit calculations of intersection probabilities and independence angles α for regular n-gons (n ≥ 2) were carried out by Bäsel [4].

C intersects at least one and at most four parallelograms of R_{a,b,α}. Our first aim is to calculate the probabilities p(i) that C intersects exactly i parallelograms of R_{a,b,α}.

2 Intersection probabilities

Theorem 1. The probabilities p(i) that C intersects exactly i parallelograms of R_{a,b,α} are given by

  p(1) = 1 − (a+b)u/(πab) + P(A∩B),
  p(2) = (a+b)u/(πab) − 2 P(A∩B),
  p(3) = P(A∩B) − F sin α/(ab),
  p(4) = F sin α/(ab),

with

  P(A∩B) = (1/(πab)) ∫₀^π w(φ) w(φ+α) dφ,

and the expectation of the random number Z of intersected parallelograms is

  E(Z) = 1 + (a+b)u/(πab) + F sin α/(ab).

[Fig. 1: Situation for a fixed value of the angle φ]   [Fig. 2: Rearrangement]

Proof. We choose a fixed reference point O and a fixed line segment σ (starting from O) inside the convex body C (Fig. 1). φ is the angle between the direction perpendicular to the lines of R_a and σ. For a fixed value of φ, C intersects exactly i ∈ {1, 2, 3, 4} parallelograms of R_{a,b,α} if O lies inside the set with number i. Hence the conditional probability of exactly i intersections for fixed φ is given by

  p(i | φ) = F_i(φ) sin α / (ab),

where F_i(φ) denotes the area of the set i. With the density function of the random variable φ,

  f(φ) = 1/π if 0 ≤ φ ≤ π, and 0 else,

we get the (total) probability of exactly i intersected parallelograms

  p(i) = ∫₀^π p(i | φ) f(φ) dφ = (sin α/(πab)) ∫₀^π F_i(φ) dφ.

By cutting the parallelogram along the lines g₁ and g₂ and rearranging the four parts in a suitable way, one gets the parallelogram of Fig. 2. With the help of this figure it is easy to find the areas F_i(φ). For every angle φ the area F₄(φ) of the set number 4 is equal to F (cf. [12, p. 321]), since this set is a congruent copy of C (with reference point O and angle φ+π); furthermore,

  F₁(φ) = (a − w(φ))(b − w(φ+α)) / sin α,
  F₂(φ) = [(a − w(φ)) w(φ+α) + w(φ)(b − w(φ+α))] / sin α,
  F₃(φ) = w(φ) w(φ+α)/sin α − F.

Taking into account that ∫₀^π w(φ) dφ = ∫₀^π w(φ+α) dφ = u, we find

  p(1) = 1 − (a+b)u/(πab) + (1/(πab)) ∫₀^π w(φ) w(φ+α) dφ,
  p(2) = (a+b)u/(πab) − (2/(πab)) ∫₀^π w(φ) w(φ+α) dφ,
  p(3) = (1/(πab)) ∫₀^π w(φ) w(φ+α) dφ − F sin α/(ab),
  p(4) = F sin α/(ab).

For a fixed value of φ, the event A∩B occurs (C intersects R_a and R_b at the same time) if O is in the union of the sets with numbers 3 and 4. Hence the conditional probability P(A∩B | φ) is given by

  P(A∩B | φ) = [w(φ) w(φ+α)/sin α] / [ab/sin α] = w(φ) w(φ+α)/(ab).

So we have

  P(A∩B) = ∫₀^π P(A∩B | φ) f(φ) dφ = (1/(πab)) ∫₀^π w(φ) w(φ+α) dφ.

(This is the formula in [12, p. 320, Theorem 1] with i = j = 1.) For the expectation we get

  E(Z) = Σ_{i=1}^4 i p(i) = 1 + (a+b)u/(πab) + F sin α/(ab).
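Theorem 1 can be sanity-checked by simulation for a concrete convex body. The following sketch (our own illustration, not from the paper) throws a circle of diameter d onto the lattice; for a circle w ≡ d, so P(A∩B) = d²/(ab). A point at signed offsets p, q from the two nearest lines is counted in 4 parallelograms exactly when the circle covers the nearest cell corner.

```python
import math
import random

def p_exact(d, a, b, alpha):
    # Theorem 1 specialised to a circle of diameter d (w(phi) = d, F = pi d^2 / 4)
    pab = d * d / (a * b)
    p4 = math.pi * d * d * math.sin(alpha) / (4 * a * b)
    return [1 - (a + b) * d / (a * b) + pab,
            (a + b) * d / (a * b) - 2 * pab,
            pab - p4,
            p4]

def p_simulated(d, a, b, alpha, trials=200_000, seed=7):
    # Monte Carlo throw of the circle's centre; requires d <= min(a, b)
    rng = random.Random(seed)
    sa, ca = math.sin(alpha), math.cos(alpha)
    counts = [0, 0, 0, 0]
    r = d / 2
    for _ in range(trials):
        # (p, q): offsets along the normals of R_a and R_b within one cell;
        # uniform on the cell maps to independent uniforms here
        p, q = rng.random() * a, rng.random() * b
        x, y = (p + q * ca) / sa, q          # Cartesian centre
        cross_a = min(p, a - p) < r          # circle meets a line of R_a
        cross_b = min(q, b - q) < r          # circle meets a line of R_b
        if cross_a and cross_b:
            # 4 parallelograms iff the circle covers a corner of the cell
            corner = any(math.hypot(x - (i * a + j * b * ca) / sa, y - j * b) < r
                         for i in (0, 1) for j in (0, 1))
            counts[3 if corner else 2] += 1
        elif cross_a or cross_b:
            counts[1] += 1
        else:
            counts[0] += 1
    return [c / trials for c in counts]
```

With d = 1, a = b = 2, α = 1, the simulated frequencies agree with p(1), …, p(4) to within Monte Carlo noise.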

Remark. Note that p(1) is the probability that C intersects no line of the lattice R_{a,b,α}. It follows that

  P(A∪B) = 1 − p(1) = (a+b)u/(πab) − (1/(πab)) ∫₀^π w(φ) w(φ+α) dφ,

which is the result of Theorem 1 in [1, pp. 302/303]. p(2) is the probability that C intersects exactly one line, and the sum p(3) + p(4) the probability that C intersects exactly two lines. The expectation of the random number Y of intersected lines is

  E(Y) = 1·p(2) + 2·(p(3) + p(4)) = (a+b)u/(πab).

Remark. For b → ∞ we find

  P(Ā) = p(1) = 1 − u/(πa)  and  P(A) = p(2) = u/(πa).

This is the result of Barbier [2] (cf. [1, p. 303]). Furthermore,

  E(Z) = 1·(1 − u/(πa)) + 2·u/(πa) = 1 + u/(πa).

3 Some special cases

3.1 Rectangles

Let s and t denote the side lengths of the rectangle. The width in the direction φ is given by

  w(φ) = s |cos φ| + t |sin φ|.

w is a π-periodic function; restricted to 0 ≤ φ ≤ 3π/2 it reads

  w(φ) = s cos φ + t sin φ   if 0 ≤ φ < π/2,
  w(φ) = −s cos φ + t sin φ  if π/2 ≤ φ < π,
  w(φ) = −s cos φ − t sin φ  if π ≤ φ ≤ 3π/2.

For the calculation of

  P(A∩B) = (1/(πab)) ∫₀^π w(φ) w(φ+α) dφ

we have to distinguish the cases

  0 ≤ φ < π/2,  π/2 ≤ φ ≤ π   and   α ≤ φ+α < π/2,  π/2 ≤ φ+α < π,  π ≤ φ+α ≤ π+α.

Since 0 < α ≤ π/2, this yields the subdivision

  0 ≤ φ < π/2−α,  π/2−α ≤ φ < π/2,  π/2 ≤ φ < π−α,  π−α ≤ φ ≤ π;

therefore

  P(A∩B) = (1/(πab)) ( ∫₀^{π/2−α} + ∫_{π/2−α}^{π/2} + ∫_{π/2}^{π−α} + ∫_{π−α}^{π} ) w(φ) w(φ+α) dφ
         = { [(π−2α)(s²+t²) + 4st] cos α + 2(s²+t²+2αst) sin α } / (2πab),

and

  p(1) = 1 − 2(a+b)(s+t)/(πab) + P(A∩B),
  p(2) = 2(a+b)(s+t)/(πab) − 2 P(A∩B),
  p(3) = P(A∩B) − st sin α/(ab),
  p(4) = st sin α/(ab).

3.2 Regular polygons

Let l denote the radius of the circumscribed circle of the regular polygon. One finds

  F = (1/2) n l² sin(2π/n),  u = 2 n l sin(π/n),

and

  p(1) = 1 − 2(a+b) n l sin(π/n)/(πab) + P(A∩B),
  p(2) = 2(a+b) n l sin(π/n)/(πab) − 2 P(A∩B),
  p(3) = P(A∩B) − (n l²/(2ab)) sin(2π/n) sin α,
  p(4) = (n l²/(2ab)) sin(2π/n) sin α.

The formulas for P(A∩B) are already known [4, pp. 248/249]:

  P(A∩B) = (n l²/(πab)) [ sin(α−δ) + sin(2π/n − (α−δ)) + (α−δ) cos(2π/n − (α−δ)) + (2π/n − (α−δ)) cos(α−δ) ]

with δ = (2π/n) ⌊nα/(2π)⌋
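The closed form for the rectangle's P(A∩B) can be cross-checked against direct numerical integration of ∫₀^π w(φ) w(φ+α) dφ with w(φ) = s|cos φ| + t|sin φ|. A minimal sketch (function names are ours):

```python
import math

def width_rect(s, t, phi):
    # width of an s x t rectangle in direction phi (pi-periodic)
    return s * abs(math.cos(phi)) + t * abs(math.sin(phi))

def integral_numeric(s, t, alpha, n=20_000):
    # composite midpoint rule for the integral of w(phi) w(phi + alpha) over [0, pi]
    h = math.pi / n
    return sum(width_rect(s, t, (k + 0.5) * h) *
               width_rect(s, t, (k + 0.5) * h + alpha) for k in range(n)) * h

def integral_closed(s, t, alpha):
    # closed form: ([(pi - 2a)(s^2 + t^2) + 4 s t] cos a + 2 (s^2 + t^2 + 2 a s t) sin a) / 2
    return (((math.pi - 2 * alpha) * (s * s + t * t) + 4 * s * t) * math.cos(alpha)
            + 2 * (s * s + t * t + 2 * alpha * s * t) * math.sin(alpha)) / 2
```

For α = π/2 and s = t = 1 the integrand is w(φ)², and the closed form reduces to π + 2, as a direct computation confirms; the needle case t = 0 reproduces the formula of Section 3.5.1.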

for even n, and

  P(A∩B) = (2 n l²/(πab)) cos²(π/(2n)) [ sin(α−δ) + sin(π/n − (α−δ)) + (α−δ) cos(π/n − (α−δ)) + (π/n − (α−δ)) cos(α−δ) ]

with δ = (π/n) ⌊nα/π⌋ for odd n, where ⌊·⌋ denotes the integer part.

3.3 Ellipses

Let s and t denote the lengths of the major axis and the minor axis, respectively. We have F = πst/4 and

  w(φ) = √(s² cos² φ + t² sin² φ) = s √(1 − µ² sin² φ),  µ² = 1 − (t/s)²,

hence

  u = ∫₀^π w(φ) dφ = 2 s E(µ),

where

  E(µ) = ∫₀^{π/2} √(1 − µ² sin² φ) dφ

is the complete elliptic integral of the second kind, and hence

  p(1) = 1 − 2(a+b) s E(µ)/(πab) + P(A∩B),
  p(2) = 2(a+b) s E(µ)/(πab) − 2 P(A∩B),
  p(3) = P(A∩B) − πst sin α/(4ab),
  p(4) = πst sin α/(4ab),

with

  P(A∩B) = (s²/(πab)) ∫₀^π √( (1 − µ² sin² φ)(1 − µ² sin²(φ+α)) ) dφ.

Furthermore, we find

  P(A∪B) = 2(a+b) s E(µ)/(πab) − (s²/(πab)) ∫₀^π √( (1 − µ² sin² φ)(1 − µ² sin²(φ+α)) ) dφ;

this is the result of [1, p. 304], which extends the result for the case α = π/2 [8, pp. 971/972].

3.4 Orbiforms and (regular) Reuleaux polygons

An orbiform is a plane convex body of constant width d. All orbiforms of width d have perimeter

  u = ∫₀^π w(φ) dφ = ∫₀^π d dφ = πd.

A Reuleaux polygon is an orbiform whose boundary consists of a finite number n of circular arcs (sides) of radius d whose centre points are the vertices of the polygon. Reuleaux polygons necessarily have an odd number of vertices (and sides) [6, pp. 130/131]. If all of its sides are of equal length, a Reuleaux polygon is called regular [9].

For all orbiforms of width d we have P(A∩B) = d²/(ab), hence

  p(1) = 1 − (a+b)d/(ab) + d²/(ab),
  p(2) = (a+b)d/(ab) − 2d²/(ab),
  p(3) = d²/(ab) − F sin α/(ab),
  p(4) = F sin α/(ab),

and

  E(Z) = 1 + (a+b)d/(ab) + F sin α/(ab).

Lebesgue [11] and Blaschke [5] found that, among all orbiforms of given width, the Reuleaux triangle (which is regular) has the least area. (Of course, the circle has the greatest area.) For an n-sided regular Reuleaux polygon we find

  F = (d²/2) (π − n tan(π/(2n)))

and hence

  (π − 3 tan(π/6)) d²/2 = (π − √3) d²/2 ≤ F ≤ πd²/4

for all orbiforms of width d. Note that [π − n tan(π/(2n))]/2 is a strictly increasing function of n (cf. [9, p. 824]).

3.5 Special cases of special cases

3.5.1 Needle

As special case of a rectangle or an ellipse with t = 0 one gets the result for a needle of length s with u = 2s, F = 0 and

  P(A∩B) = s² [(π−2α) cos α + 2 sin α] / (2πab).

From the regular polygons one gets the same result for n = 2 and s = 2l. Therefore we have

  p(1) = 1 − 2(a+b)s/(πab) + s²[(π−2α) cos α + 2 sin α]/(2πab),
  p(2) = 2(a+b)s/(πab) − s²[(π−2α) cos α + 2 sin α]/(πab),
  p(3) = s²[(π−2α) cos α + 2 sin α]/(2πab),
  p(4) = 0.

This is the result of Santaló [14, pp. 166/167], [15, p. 139], also derived as the special case n = 2 of the intersection probabilities for a star of needles (= n line segments of equal length l with common endpoint and constant angular spacing 2π/n) and R_{a,b,α} [3, p. 47]. The expectation is given by

  E(Z) = 1 + 2(a+b)s/(πab).

In the case α = π/2 we get

  P(A∪B) = 1 − p(1) = [2(a+b)s − s²]/(πab),

which is the result of Laplace, published first in 1812 [10], cf. [7, pp. 86/87]. For b → ∞ we find

  p(1) = P(Ā) = 1 − 2s/(πa)  and  p(2) = P(A) = 2s/(πa);

that is the result of the Buffon needle problem (see e.g. [7, pp. 84/85]).

3.5.2 Circle

The circle with diameter d is an orbiform with F = πd²/4, hence

  p(1) = 1 − (a+b)d/(ab) + d²/(ab),
  p(2) = (a+b)d/(ab) − 2d²/(ab),
  p(3) = d²/(ab) − πd² sin α/(4ab),
  p(4) = πd² sin α/(4ab),

and

  E(Z) = 1 + (a+b)d/(ab) + πd² sin α/(4ab).

The circle is also a special case of the regular Reuleaux polygons, since

  lim_{n→∞} F = πd²/2 − (d²/2) lim_{n→∞} n tan(π/(2n)) = πd²/2 − πd²/4 = πd²/4,

of the ellipses for d = s = t, and of the regular polygons for n → ∞ and d = 2l.
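The area formula for regular Reuleaux polygons, its monotonicity in n, and its limit πd²/4 for the circle can all be checked numerically; a small sketch (our own, assuming width d = 1):

```python
import math

def reuleaux_area(n, d=1.0):
    # area of the regular Reuleaux n-gon of width d (n odd): (d^2 / 2)(pi - n tan(pi / 2n))
    return d * d / 2 * (math.pi - n * math.tan(math.pi / (2 * n)))

# areas for n = 3, 5, ..., 99; strictly increasing towards pi/4
areas = [reuleaux_area(n) for n in range(3, 101, 2)]
```

The n = 3 value reproduces the Reuleaux triangle area (π − √3)d²/2, the sequence increases strictly, and for large n it approaches the circle's area πd²/4 from below.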

4 Independence

4.1 General results

Now we ask for the conditions of independence of A and B, that is, P(A∩B) = P(A)·P(B), and hence

  ∫₀^π w(φ) w(φ+α) dφ = (1/π) ( ∫₀^π w(φ) dφ )² = u²/π.   (2)

Theorem 2.
1) There exists at least one angle α ∈ (0, π/2] such that the events A and B are independent.
2) A and B are independent for any angle α if C is an orbiform.
3) If A and B are independent for every α ≥ 0, then C is an orbiform.

Proof. 1) See [12, p. 325], [13, p. 77] (using the mean value theorem for integration) and [1] (using Fourier series).

2) For an orbiform of width d we get

  ∫₀^π w(φ) w(φ+α) dφ = d² ∫₀^π dφ = πd².

Due to Barbier's theorem [2], [18, p. 17], we have d = u/π. The result follows.

3) In the case α = 0, Eq. (2) can be written as

  (1/π) ∫₀^π w²(φ) dφ = ( (1/π) ∫₀^π w(φ) dφ )² = (u/π)².

Clearly,

  E(w) = (1/π) ∫₀^π w(φ) dφ  and  E(w²) = (1/π) ∫₀^π w²(φ) dφ

are the first moment (expectation) and the second moment of w, respectively. So we find for the variance of w

  Var(w) = E[(w − E(w))²] = (1/π) ∫₀^π [w(φ) − E(w)]² dφ = E(w²) − (E(w))² = 0.

It follows that w(φ) = E(w) = u/π = const.

Remark. Different proofs of parts 2 and 3 of Theorem 2 can be found in [1].

Corollary 1. Let π/n, n ∈ N, be the smallest period of w. Then there are at least two values of α in every interval

  (kπ/n, (k+1)π/n),  k = 0, …, n−1,

such that the events A and B are independent. A and B are independent for α = kπ/n, k = 0, …, n, if and only if w is a constant function.

Proof. w is a π-periodic function and hence has a smallest period π/n, n ∈ N. Since w is a continuous function, the results follow with Theorem 2.

The problem of characterizing the convex bodies C for which the angle α making A and B independent is unique was proposed in [1]. Results for regular polygons can be found in [4]. In the following we solve the problem for the cases that C is a rectangle or an ellipse.

4.2 Rectangles

In the case of independence, Eq. (2) becomes

  [(π−2α)(x²+1) + 4x] cos α + 2(x²+2αx+1) sin α = (8/π)(x+1)²   (3)

with x := t/s (cf. [12, p. 326]). This is a transcendental equation for the angles α for which the events A and B are independent. We assume t ≤ s, hence 0 ≤ x ≤ 1. Numerical solutions were calculated with Mathematica using the function FindRoot, see Fig. 3. For x = 0 the calculation yields Schuster's result [16]. One observes that for 0 ≤ x < x₀ with x₀ ≈ 0.658 there is a unique angle of independence, while for x₀ ≤ x ≤ 1 there are two angles of independence. In order to calculate the exact value of x₀ we put α = π/2 in (3) and get

  x² − ((π²−8)/(4−π)) x + 1 = 0.

It follows that

  x₀ = (1/2) [ (π²−8)/(4−π) − √( ((π²−8)/(4−π))² − 4 ) ]

(cf. [12, p. 326]). For a square, x = 1, we have two angles of independence, α₁ and α₂ = 90° − α₁; these angles can also be found as a special case of the result for regular polygons, see [4].
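Instead of Mathematica's FindRoot, Eq. (3) can be solved with a plain bisection; the sketch below (our own code) localises the needle's independence angle (x = 0) and evaluates the closed form for x₀:

```python
import math

def indep_defect(alpha, x):
    # left-hand side minus right-hand side of Eq. (3); zero at an independence angle
    return (((math.pi - 2 * alpha) * (x * x + 1) + 4 * x) * math.cos(alpha)
            + 2 * (x * x + 2 * alpha * x + 1) * math.sin(alpha)
            - 8 / math.pi * (x + 1) ** 2)

def bisect(f, lo, hi, tol=1e-12):
    # simple bisection; assumes f(lo) and f(hi) have opposite signs
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

# Schuster's case: the needle, x = 0 (the defect changes sign on (0.1, pi/2))
alpha_needle = bisect(lambda a: indep_defect(a, 0.0), 0.1, math.pi / 2)

# threshold x0 where alpha = pi/2 becomes a second solution of Eq. (3)
q = (math.pi ** 2 - 8) / (4 - math.pi)
x0 = (q - math.sqrt(q * q - 4)) / 2
```

By construction x₀ satisfies Eq. (3) at α = π/2, and the bisection residual at alpha_needle is at the level of the tolerance.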

[Fig. 3: Rectangles: angles α of independence as a function of x = t/s]

[Fig. 4: Ellipses: angle α of independence as a function of x = t/s]
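The curve of Fig. 4 can be reproduced numerically without special software; a sketch (our own code, shown for the axis ratio x = t/s = 0.5) evaluates the ellipse's independence condition by midpoint quadrature and bisects for the root:

```python
import math

def ellipse_defect(alpha, x, n=2000):
    # independence condition for an ellipse with axis ratio x = t/s:
    # integral of w-products over [0, pi] minus (4/pi) E(mu)^2 (up to the factor s^2)
    mu2 = 1.0 - x * x
    w = lambda phi: math.sqrt(1.0 - mu2 * math.sin(phi) ** 2)
    h = math.pi / n
    lhs = sum(w((k + 0.5) * h) * w((k + 0.5) * h + alpha) for k in range(n)) * h
    h2 = (math.pi / 2) / n
    ell_e = sum(w((k + 0.5) * h2) for k in range(n)) * h2   # E(mu), numerically
    return lhs - 4.0 / math.pi * ell_e * ell_e

def independence_angle(x, lo=1e-4, hi=math.pi / 2, steps=60):
    # bisection: the defect is positive near alpha = 0 and negative at pi/2
    for _ in range(steps):
        mid = (lo + hi) / 2
        if ellipse_defect(lo, x) * ellipse_defect(mid, x) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2
```

For x = 0 this reduces to Schuster's needle equation; for x → 1 the defect vanishes identically (the circle).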

4.3 Ellipses

Eq. (2) becomes

  ∫₀^π √( (1−µ² sin² φ)(1−µ² sin²(φ+α)) ) dφ = (4/π) ( ∫₀^{π/2} √(1−µ² sin² φ) dφ )²

with µ² = 1 − x² and x := t/s, 0 ≤ x < 1. (In the case x = 1 we have a circle, and A and B are independent for every α.) Using Mathematica with numerical evaluation of both integrals (NIntegrate) and FindRoot, we get Fig. 4. The angle α of independence is always uniquely determined (except for x = 1), and for x = 0 we again find Schuster's result.

References

[1] A. Aleman, M. Stoka, T. Zamfirescu: Convex bodies instead of needles in Buffon's experiment, Geometriae Dedicata 67 (1997).
[2] J.-É. Barbier: Note sur le problème de l'aiguille et le jeu du joint couvert, Journal de mathématiques pures et appliquées, 2e sér., 5 (1860).
[3] U. Bäsel: Geometrische Wahrscheinlichkeiten für Nadelsterne und Parallelogrammgitter, FernUniversität in Hagen: Seminarberichte aus der Fakultät für Mathematik und Informatik 83 (2010).
[4] U. Bäsel: Buffon's problem with regular polygons, Beitr. Algebra Geom. 53, No. 1 (2012).
[5] W. Blaschke: Konvexe Bereiche gegebener konstanter Breite und kleinsten Inhalts, Math. Annalen 76 (1915).
[6] T. Bonnesen, W. Fenchel: Theorie der konvexen Körper, Ergeb. Math., Springer, Berlin.
[7] E. Czuber: Geometrische Wahrscheinlichkeiten und Mittelwerte, B. G. Teubner, Leipzig.
[8] A. Duma, M. Stoka: Hitting probabilities for random ellipses and ellipsoids, J. Appl. Prob. 30 (1993).
[9] W. J. Firey: Isoperimetric ratios of Reuleaux polygons, Pac. J. Math. 10 (1960).
[10] P.-S. Laplace: Théorie analytique des probabilités, Mme Ve Courcier, Paris.

[11] H. Lebesgue: Sur le problème des isopérimètres et sur les domaines de largeur constante, Bull. Soc. Math. France C. R. (1914).
[12] D. Ren, G. Zhang: Random convex sets in a lattice of parallelograms, Acta Math. Sci. 11 (1991).
[13] D. Ren: Topics in Integral Geometry, World Scientific, Singapore, New Jersey, London, Hong Kong.
[14] L. A. Santaló: Sur quelques problèmes de probabilités géométriques, Tôhoku Math. J. 47 (1940).
[15] L. A. Santaló: Integral Geometry and Geometric Probability, Addison-Wesley, London.
[16] E. F. Schuster: Buffon's needle experiment, Am. Math. Mon. 81 (1974).
[17] M. Stoka: Sur quelques problèmes de probabilités géométriques pour des réseaux dans l'espace euclidien E_n, Publ. Inst. Stat. Univ. Paris 34, No. 3 (1989).
[18] K. Voss: Integralgeometrie für Stereologie und Bildrekonstruktion, Springer-Verlag, Berlin, Heidelberg.

Uwe BÄSEL, HTWK Leipzig, Fakultät für Maschinenbau und Energietechnik, Leipzig, Germany. uwe.baesel@htwk-leipzig.de
Andrei DUMA, FernUniversität in Hagen, Fakultät für Mathematik und Informatik, Hagen, Germany. Mathe.Duma@FernUni-Hagen.de

Received:

Computing the partial liftings of Machado's binomial relations

Gioia Failla
DIMET, University of Reggio Calabria
gioia.failla@unirc.it

Abstract. In this note we give the complete proof of all first and second partial liftings of Machado's relations described in the paper [6], where we investigated the problem of finding the defining equations of the Hankel algebraic variety H(2, n) ⊂ P^N.

Introduction

The Hankel variety H(r, n) of Hankel r-planes of P^n, as a subvariety of the Grassmann variety G(r, n), was first introduced by Giuffrida and Maggioni ([7]). Later we described its singular locus ([4], [5]) and turned to the problem of determining the defining equations of H(r, n). For the Grassmann variety of r-planes of P^n the defining equations are known, although not explicitly written down ([1]). For r = 1 the defining equations of H(1, n) were found by Conca, Herzog and Valla using Sagbi basis theory ([3]). In [6] we described an algorithm, based on Sagbi basis theory, to determine the relations of H(r, n) by lifting the binomial relations of the toric deformation of H(r, n) found by Machado in [8]. In this note we give a detailed proof of all partial liftings for H(2, n) described in [6]. Some of the results of this note were first conjectured with the help of the software CoCoA ([2]).

1 Preliminaries

Let R be a commutative ring. A matrix of the form

  H_{r,n} = ( x_1  x_2  …  x_n  x_{n+1}
              x_2  x_3  …  x_{n+1}  x_{n+2}
              ⋮
              x_r  x_{r+1}  …  x_{n+r−1}  x_{n+r} )

with coefficients in R is called a Hankel matrix. In this paper we consider generic Hankel matrices H_{r,n}, in other words, Hankel matrices whose entries are indeterminates. Let K be a field and S = K[x_1, x_2, …, x_{n+r}] the polynomial ring over K in n+r indeterminates. We denote by [i_1 i_2 … i_r] the r-minor with columns i_1 < i_2 < … < i_r. Let < be the lexicographical order induced by x_1 > x_2 > … > x_{n+r}. Then

  in_<[i_1 i_2 … i_r] = x_{i_1} x_{i_2+1} ⋯ x_{i_r+r−1}.

Notice that x_{i_1} x_{i_2+1} ⋯ x_{i_r+r−1} is the product of the entries on the main diagonal of the minor [i_1 i_2 … i_r]. Let A_{2,n} be the K-algebra generated by the initial monomials x_{i_1} x_{i_2+1} x_{i_3+2}, 1 ≤ i_1 < i_2 < i_3 ≤ n, of the 3-minors of H_{3,n}. Let T = K[y_{i_1 i_2 i_3} : 1 ≤ i_1 < i_2 < i_3 ≤ n] be the polynomial ring in the variables y_{i_1 i_2 i_3}, and let ψ : T → A_{2,n} be the K-algebra homomorphism with y_{i_1 i_2 i_3} ↦ x_{i_1} x_{i_2+1} x_{i_3+2}. Each monomial of degree d in T can be identified with a d×3 matrix

  ( i_{11} i_{12} i_{13}
    i_{21} i_{22} i_{23}
    ⋮
    i_{d1} i_{d2} i_{d3} )

such that (i_{11}, i_{12}, i_{13}) ≤ (i_{21}, i_{22}, i_{23}) ≤ … ≤ (i_{d1}, i_{d2}, i_{d3}) in the lexicographical order. In particular a monomial of degree two in T corresponds to a matrix of the form

  ( a b c
    d e f )

with a < b < c, d < e < f and (a, b, c) ≤ (d, e, f). The kernel J = ker ψ has been determined by Machado ([8]), even for generalized Hankel matrices of arbitrary size. In our case J is generated by the following types of relations (writing (a b c; d e f) for the degree-two monomial y_{abc} y_{def}):

  (a b c; d e f) − (a e c; d b f)   with e < b, c ≤ f,
  (a b c; d e f) − (a e f; d b c)   with e < b, f < c,
  (a b c; d e f) − (a b f; d e c)   with b ≤ e, f < c,

and, assuming that a ≤ d, b ≤ e, c ≤ f,

  (a b c; d e f) − (a d−1 c; b+1 e f)      with b ≪ d, e ≤ c+1, d−1 < c,
  (a b c; d e f) − (a b e−1; d c+1 f)      with d ≤ b+1, c ≪ e, c+1 < f,
  (a b c; d e f) − (a d−1 e−1; b+1 c+1 f)  with b ≪ d, c ≪ e.

Here we set i ≪ j if j − i ≥ 2.

In the following theorem we give a criterion for the existence of a Sagbi basis, which is a variation of the known criterion ([9]). The proof is contained in [6].

Theorem 1.1. Let T = K[y_1, …, y_m] be the polynomial ring over K in the variables y_1, …, y_m, let φ : T → A be the K-algebra homomorphism with y_i ↦ a_i, and ψ : T → in_<(A) the K-algebra homomorphism with y_i ↦ in_<(a_i) for i = 1, …, m. Let I = ker φ and let f_1, …, f_r be a set of binomial generators of J = ker ψ. Then the following conditions are equivalent:
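Each of Machado's binomials lies in ker ψ because both degree-two monomials have the same image: the two triples contribute the same multiset of variable indices after the shifts 0, +1, +2. A minimal sketch checking this mechanically (function names are ours):

```python
from collections import Counter

def psi(minor):
    # image of y_{i1 i2 i3} under psi: the initial monomial x_{i1} x_{i2+1} x_{i3+2},
    # represented as a multiset of variable indices
    i1, i2, i3 = minor
    return Counter([i1, i2 + 1, i3 + 2])

def same_image(pair1, pair2):
    # a binomial (pair1) - (pair2) lies in ker(psi) iff both degree-two
    # monomials map to the same monomial of A_{2,n}
    return psi(pair1[0]) + psi(pair1[1]) == psi(pair2[0]) + psi(pair2[1])
```

Concrete instances of the relation types (indices chosen to satisfy the stated inequalities) confirm the kernel membership, while a generic swap of indices does not.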

(a) a_1, …, a_m is a Sagbi basis of A.

(b) For each j there exist monomials m_1, …, m_s ∈ T and c_1, …, c_s ∈ K such that

  (i) f_j + Σ_{i=1}^s c_i m_i ∈ I;
  (ii) in_<(φ(m_{i+1})) = in_<(φ(f_j + Σ_{k=1}^i c_k m_k)) < in_<(φ(m_i)), and in_<(φ(f_j + c_1 m_1)) < in_<(φ(f_j)).

If the equivalent conditions are satisfied, we call f_j + Σ_{i=1}^s c_i m_i a lifting of f_j.

2 An algorithm to compute the partial relations of H(2, n)

We shall make essential use of Theorem 1.1 in order to prove that the maximal minors of the Hankel matrix H_{3,n} form a Sagbi basis of the coordinate ring of the Hankel variety H(2, n). According to Theorem 1.1 we proceed as follows:

1) We choose one of the binomial relations between the initial terms of the minors in the initial algebra listed in Section 1, replace in the relation the initial terms by the corresponding minors to obtain the element f_1, and determine its initial term.

2) If in_< f_1 is not a product of the initial terms of two minors of H_{3,n}, then the relation f_1 is not liftable. In this case the minors of the Hankel matrix do not form a Sagbi basis (in our case this never happened). If in_< f_1 is a product of the initial terms of two minors m_1, m_2, then we add a suitable multiple of m_1 m_2 to f_1 to obtain f_2 with the property that in_< f_2 < in_< f_1. We then proceed with f_2 in the same way as with f_1.

In the following theorem we apply the algorithm and find the general expression of the first and second liftings of the binomial relations of Machado.

Theorem 2.1. The binomial relations of the K-algebra A_{2,n} have lifting polynomials of length 3 if they are Plücker relations, and of length 4 if they are Hankel relations.
More precisely, depending on the Machado inequalities, we have the following liftings and partial liftings (writing [a b c; d e f] for the product of the minors [a b c] and [d e f]):

(I) e < b, c ≤ f

(IA) c = f, a < d < e < b < c:
  [a b c; d e c] − [a e c; d b c] + [a d c; e b c]

(IB) c < f, a < d < e < b < c < f:
  [a b c; d e f] − [a e c; d b f] + [a e b; d c f] + [a d c; e b f] + ⋯

(II) e < b, f < c, a < d < e < b ≤ f < c:
  [a b c; d e f] − [a e f; d b c] + [a d f; e b c] + [a d e; b f c] + ⋯

(III) b ≤ e, f < c

(IIIA) a = d, a < b < e < f < c:
  [a b c; a e f] − [a b f; a e c] + [a b e; a f c]

(IIIB) b = e, a < d < b < f < c:
  [a b c; d b f] − [a b f; d b c] + [a d b; b f c]

(IIIC) b ≤ d, a < b ≤ d < e < f < c:
  [a b c; d e f] − [a b f; d e c] + [a b e; d f c] + [a b f−2; d+1 e+1 c] + ⋯

(IIID) b > d, a < d < b < e < f < c:
  [a b c; d e f] − [a b f; d e c] + [a b e; d f c] + [a d b; e f c] + ⋯

(IV) 2 ≤ d−b, e ≤ c+1, d−1 < c

(IVA) a < b ≪ d < e−1 ≤ c < e < f:
  [a b c; d e f] − [a d−1 c; b+1 e f] + [a d e−1; b+1 c f] + [a d e; b+1 c f−1] + ⋯

(IVB) d = c = e−1, a < b ≪ e−1 < e < f−1 < f:
  [a b e−1; e−1 e f] − [a e−2 e−1; b+1 e f] + [a e−2 e; b+2 e f−1] + [a e−1 e; b+1 e−1 f−1] + ⋯

(IVB1) d = c = e−1, f = e+1, a < b ≪ e−1 < e < e+1:
  [a b e−1; e−1 e e+1] − [a e−2 e−1; b+1 e e+1] + [a e−1 e; b+1 e−1 e] + [a+1 e−2 e−1; b e e+1]

(IVC) d < e−1, c = e, a < b ≪ d < e−1 < e < f:
  [a b e; d e f] − [a d−1 e; b+1 e f] + [a d e; b+1 e−1 f] + [a d e−1; b+1 e f] + ⋯

(IVD) d = e−1, c = e, a < b ≪ e−1 < e < f−1 < f:
  [a b e; e−1 e f] − [a e−2 e; b+1 e f] + [a e−1 e; b+1 e−1 f] + [a e−1 e; b+1 e f−1] + ⋯

(IVD1) d = e−1, c = e, f = e+1, a < b ≪ e−1 < e < e+1:
  [a b e; e−1 e e+1] − [a e−2 e; b+1 e e+1] + [a+1 e−2 e; b e e+1] + [a e−1 e; b+1 e−1 e]

(IVE) a < b ≪ d < e < c < f:
  [a b c; d e f] − [a d−1 c; b+1 e f] + [a d c; b+1 e−1 f] + [a d c−1; b+1 e f] + ⋯

(V) d ≤ b+1, 2 ≤ e−c, c+1 < f

(VA) b = d−1, d < c, c+3 < f, a < b < d < c ≪ e < f:
  [a d−1 c; d e f] − [a d−1 e−1; d c+1 f] + [a d−1 e; d c+1 f−1] + [a d−1 e−1; d c+2 f−1] + ⋯

(VA1) b = d−1, d = c, f = d+3, a < d−1 < d ≪ d+2 < d+3:
  [a d−1 d; d d+2 d+3] − [a d−1 d+1; d d+1 d+3] + [a d−1 d; d d+1 d+2] + [a+1 d d+1; d−1 d d+2]

(VB) b = d−1, d < c, c+3 = f, a < b < d < c ≪ c+2 < c+3:
  [a d−1 c; d c+2 c+3] − [a d−1 c+1; d c+1 c+3] + [a d−1 c; d c+1 c+2] + [a d c+2; d c c+1]

(VC) d ≤ b, c+3 < f, a < d ≤ b < c ≪ e < f:
  [a b c; d e f] − [a b e−1; d c+1 f] + [a b e; d c+1 f−1] + [a b c; d e f−1] + ⋯

(VC1) d ≤ b, c+3 = f, a < d ≤ b < c ≪ e < f:
  [a b c; d c+2 c+3] − [a b c+1; d c+1 c+3] + [a b c; d c+1 c+2] + [a b+1 c+1; d c c+2]

(VI) 2 ≤ d−b, 2 ≤ e−c

(VIA) c < d, c+3 < f, a < b < c < d < e < f:
  [a b c; d e f] − [a d−1 e−1; b+1 c+1 f] + [a d−1 e; b+1 c+1 f−1] + [a d e; b+1 c+1 f−2] + ⋯

(VIA1) c < d, c+3 = f, a < b < c < c+1 < c+2 < c+3:
  [a b c; c+1 c+2 c+3] − [a c c+1; b+1 c+1 c+3] + [a+1 c c+1; b c+1 c+3] + [a c c+2; b+1 c+1 c+2]

(VIB) d ≤ c, c+3 < f, a < b ≪ d ≤ c ≪ e < f:
  [a b c; d e f] − [a d−1 e−1; b+1 c+1 f] + [a d−1 e; b+1 c+1 f−1] + [a d−1 e−1; b+1 c+2 f−1] + ⋯

(VIB1) d ≤ c, a < b ≪ d ≤ c ≪ c+2 < c+3:
  [a b c; d c+2 c+3] − [a d−1 c+1; b+1 c+1 c+3] + [a d−1 c+2; b+1 c+1 c+2] + [a d c+1; b+1 c+1 c+2]

Proof. In the following we describe how to find the liftings according to our algorithm.

(IA) c = f, a < d < e < b < c. The binomial relation

  (a b c; d e c) − (a e c; d b c)

is replaced by the difference [a b c][d e c] − [a e c][d b c] of the products of the corresponding minors. Here

  [a b c][d e c] = det( x_a x_b x_c / x_{a+1} x_{b+1} x_{c+1} / x_{a+2} x_{b+2} x_{c+2} ) · det( x_d x_e x_c / x_{d+1} x_{e+1} x_{c+1} / x_{d+2} x_{e+2} x_{c+2} )   (1)

and

  [a e c][d b c] = det( x_a x_e x_c / x_{a+1} x_{e+1} x_{c+1} / x_{a+2} x_{e+2} x_{c+2} ) · det( x_d x_b x_c / x_{d+1} x_{b+1} x_{c+1} / x_{d+2} x_{b+2} x_{c+2} )   (2).

All monomials of (1) divisible by x_a x_d x_{e+1} or x_a x_d x_{e+2} cancel against monomials in the support of (2). Next, in the lexicographical order, consider in (1) the monomials

divisible by x_a x_{d+1} x_e: m₁ = x_a x_{d+1} x_e x_{b+1} x_{c+2} x_{c+2} > x_a x_{d+1} x_e x_{b+2} x_{c+1} x_{c+2}. Since m₁ does not appear in (2), it follows that in_<((1) − (2)) = m₁, which gives

  [a d c][e b c] = det( x_a x_d x_c / x_{a+1} x_{d+1} x_{c+1} / x_{a+2} x_{d+2} x_{c+2} ) · det( x_e x_b x_c / x_{e+1} x_{b+1} x_{c+1} / x_{e+2} x_{b+2} x_{c+2} )   (3).

It is easy to verify that the remaining terms in the sum (1) − (2) + (3) cancel completely. Therefore (IA) is the desired lifting.

For the following cases we always adopt this procedure; we therefore sometimes leave out details and repetitions. The monomial order employed is the lexicographic order, and the order of the variables is the usual x_1 > x_2 > … > x_{n+2}. Moreover, we will identify a symbol [a d c; e b f] with the corresponding product of minors, and the difference of two symbols with the difference (i) − (j) of the corresponding products.

(IB) c < f, a < d < e < b < c < f:

  [a b c; d e f] − [a e c; d b f] = (4) − (5).

The monomials of (4) with {a, d, e+1} in their support consist of the unique monomial x_a x_d x_{e+1} x_{b+1} x_{c+2} x_{f+2}, which cancels against a product of (5), while f₁ = x_a x_d x_{e+1} x_{b+2} x_{c+1} x_{f+2} of (4) and f₂ = x_a x_d x_{e+1} x_{b+2} x_{c+2} x_{f+1} of (5) do not cancel. Then in_<((4) − (5)) = f₁, which gives

  [a e b; d c f] = (6).

Now in (4) and (5) the products with {a, d, e+1} in the support are f₁ and f₂, which cancel against products of (6). Then we consider the products with {a, d, e+2} in their support: x_a x_d x_{e+2} x_{b+2} x_{c+1} x_{f+1} and x_a x_d x_{e+2} x_{b+1} x_{c+2} x_{f+1} cancel against products of (5) and (6); in (5), x_a x_d x_{e+2} x_{b+1} x_{c+1} x_{f+2} cancels against a monomial of (6). Finally we consider the monomials with {a, d+1} in their support, starting with x_a x_{d+1} x_e, which occur only in (4): f₃ = x_a x_{d+1} x_e x_{b+1} x_{c+2} x_{f+2} and x_a x_{d+1} x_e x_{b+2} x_{c+1} x_{f+2}. Then in_<((4) − (5) + (6)) = f₃, which gives

  [a d c; e b f].

The procedure can be continued to obtain further pieces of the lifting.
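The length-3 lifting of case (IA) is a genuine relation, so the combination [a b c][d e c] − [a e c][d b c] + [a d c][e b c] vanishes identically on any Hankel matrix. A quick sketch checking this on random integer data (our own code; columns are 0-based):

```python
import random

def minor(xs, cols):
    # 3-minor [i j k] of the Hankel matrix H_{3,n}: det of the 3x3 matrix
    # whose rows are the consecutive entries x_c, x_{c+1}, x_{c+2} per column
    i, j, k = cols
    m = [[xs[i + r], xs[j + r], xs[k + r]] for r in range(3)]
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
            - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
            + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

def lifting_IA(xs, a, d, e, b, c):
    # [a b c][d e c] - [a e c][d b c] + [a d c][e b c]  (case (IA), c = f)
    return (minor(xs, (a, b, c)) * minor(xs, (d, e, c))
            - minor(xs, (a, e, c)) * minor(xs, (d, b, c))
            + minor(xs, (a, d, c)) * minor(xs, (e, b, c)))
```

With integer entries the arithmetic is exact, so the result is exactly zero, consistent with the three-term Plücker identity for minors sharing the column c.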

(II) e < b and f < c, a < d < e < b ≤ f < c:

  [a b c; d e f] − [a e f; d b c] = (7) − (8).

Consider first the monomials of (7) with {a, d} in the support: x_a x_d x_{e+1} x_{b+1} x_{f+2} x_{c+2}, x_a x_d x_{e+2} x_{b+1} x_{f+1} x_{c+2}, x_a x_d x_{e+1} x_{b+2} x_{f+2} x_{c+1} and x_a x_d x_{e+2} x_{b+2} x_{f+1} x_{c+1} cancel against products of (8). Then we consider the monomials with {a, d+1} in the support: f₁ = x_a x_{d+1} x_e x_{b+1} x_{f+2} x_{c+2} does not cancel. Then in_<((7) − (8)) = f₁ gives

  [a d f; e b c] = (9).

Now in (9) there are no monomials whose support contains {a, d}, and we consider the monomials containing the variables indexed by a, d+1. The monomials f₁ and x_a x_{d+1} x_e x_{b+2} x_{f+2} x_{c+1} of (7) cancel against monomials of (9). The monomials x_a x_{d+1} x_{e+1} x_{b+2} x_{f+2} x_c and x_a x_{d+1} x_{e+1} x_b x_{f+2} x_{c+2} of (8) cancel against monomials of (9). The monomial f₂ = x_a x_{d+1} x_{e+2} x_b x_{f+1} x_{c+2} of (8) does not cancel. Then in_<((7) − (8) + (9)) = f₂ gives

  [a d e; b f c].

(IIIA) a = d, a < b < e < f < c:

  [a b c; a e f] − [a b f; a e c] = (10) − (11).

Consider first the monomials with {a, b+1} in the support (monomials containing both x_a and x_b do not occur): in (10), x_a² x_{b+1} x_{e+1} x_{f+2} x_{c+2} cancels against a monomial of (11), while f₁ = x_a² x_{b+1} x_{e+2} x_{f+1} x_{c+2} does not cancel. Then in_<((10) − (11)) = f₁, which gives

  [a b e; a f c] = (12).

Now the remaining monomials in the sum (10) − (11) + (12) cancel completely. Therefore (IIIA) is a relation.

(IIIB) b = e, a < d < b < f < c:

  [a b c; d b f] − [a b f; d b c] = (13) − (14).

Consider first the monomials with the variables indexed by a, d: the monomials x_a x_d x_{b+1} x_{b+1} x_{f+2} x_{c+2}, x_a x_d x_{b+1} x_{b+2} x_{f+1} x_{c+2}, x_a x_d x_{b+1} x_{b+2} x_{f+2} x_{c+1} and x_a x_d x_{b+2} x_{b+2} x_{f+1} x_{c+1} of (13) cancel against monomials of (14). The monomials with {a, d+1} in the support are x_a x_{d+1} x_b x_{b+1} x_{f+2} x_{c+2} of (13), which cancels against a monomial of (14), and f₁ = x_a x_{d+1} x_b x_{b+2} x_{f+1} x_{c+2} of (14), which does not cancel. Then in_<((13) − (14)) = f₁, which gives

  [a d b; b f c] = (15).
Then in_<((13) − (14)) = f_1, which gives + [a d b; b f c] = (15).

Now the remaining monomials in (13) − (14) + (15) all vanish. Then (IIIB) is a relation.

(IIIC) b ≤ d, a < b < e < f < c

[a b c; d e f] − [a b f; d e c] = (16) − (17)

Consider first the monomials with the variables indexed by a, b+1: x_a x_{b+1} x_d x_{e+1} x_{f+2} x_{c+2} of (16) vanishes with a monomial of (17), while the monomial f_1 = x_a x_{b+1} x_d x_{e+2} x_{f+1} x_{c+2} of (16) does not vanish. Then in_<((16) − (17)) = f_1, which we write as + [a b e; d f c] = (18). Now the monomials f_1 of (16) and x_a x_{b+1} x_d x_{e+2} x_{f+2} x_{c+1} of (17) vanish with monomials of (18). Consider the monomials with {a, d+1} in the support. The monomials x_a x_{b+1} x_{d+1} x_e x_{f+2} x_{c+2} and x_a x_{b+1} x_{d+1} x_{e+2} x_f x_{c+2} of (16) vanish with monomials of (17) and (18). The monomial x_a x_{b+1} x_{d+1} x_{e+2} x_{f+2} x_c of (17) vanishes with a monomial of (18). The monomial f_2 = x_a x_{b+1} x_{d+1} x_f x_{e+2} x_{c+2} of (17) does not vanish. Then in_<((16) − (17) + (18)) = f_2, which gives + [a d e; b+1 f−1 c].

(IIID) b > d, a < d < b < e < f < c

[a b c; d e f] − [a b f; d e c] = (19) − (20)

Consider the monomials with the variables indexed by a, d: x_a x_d x_{b+1} x_{e+1} x_{f+2} x_{c+2} of (19) vanishes with a monomial of (20), while f_1 = x_a x_d x_{b+1} x_{e+2} x_{f+1} x_{c+2} of (19) does not vanish. Then in_<((19) − (20)) = f_1, which gives + [a b e; d f c] = (21). Now f_1 of (19) and x_a x_d x_{b+1} x_{e+2} x_{f+2} x_{c+1} of (20) vanish with monomials of (21). Consider the monomials containing the variables indexed by a, d, b+2: x_a x_d x_{b+2} x_{e+1} x_{f+2} x_{c+1} and x_a x_d x_{b+2} x_{e+2} x_{c+1} x_{f+1} of (19) vanish with monomials of (20) and (21); x_a x_d x_{b+2} x_{e+1} x_{f+1} x_{c+2} of (20) vanishes with monomials of (21). Consider the monomials with {a, d+1, b+1} in the support: x_a x_{d+1} x_{b+1} x_{e+2} x_f x_{c+2} and x_a x_{d+1} x_{b+1} x_e x_{f+2} x_{c+2} of (19) vanish with monomials of (20) and (21); x_a x_{d+1} x_{b+1} x_{e+2} x_{f+2} x_c of (20) vanishes with a monomial of (21).
Consider the monomials with {a, d+1, b+2} in the support: f_2 = x_a x_{d+1} x_{b+2} x_e x_{f+1} x_{c+2} of (20) does not vanish. Then in_<((19) − (20) + (21)) = f_2, which gives + [a d b; e f c]. The expression is not yet a relation, since not all of the monomials vanish. It is easy to check that the monomial x_{a+1} x_{d+2} x_b x_{e+1} x_f x_{c+2} gives the next lifting.
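In every step above, the new symbol is read off from the non-cancelling monomial by splitting it into the two main diagonals of a product of minors. That bookkeeping can be sketched as follows (our own helpers, not from the paper; we only use the fact, consistent with the cases worked out here, that the lex-leading monomial of a symbol [p q r; s t u] is the product x_p x_{q+1} x_{r+2} · x_s x_{t+1} x_{u+2} of the two diagonals):

```python
def symbol_lead(top, bottom):
    """Lex-leading monomial (sorted tuple of variable indices) of the symbol
    [top; bottom]: the product of the two main diagonals."""
    p, q, r = top
    s, t, u = bottom
    return tuple(sorted((p, q + 1, r + 2, s, t + 1, u + 2)))

def symbol_from_monomial(mono, first_diag):
    """Recover [top; bottom] from a 6-variable monomial, given which three
    indices form the first diagonal (hypothetical helper)."""
    first = sorted(first_diag)
    rest = list(mono)
    for i in first:
        rest.remove(i)
    top = (first[0], first[1] - 1, first[2] - 2)
    bottom = (rest[0], rest[1] - 1, rest[2] - 2)
    return top, bottom

# (IB) with (a, d, e, b, c, f) = (1, 2, 3, 4, 5, 6):
# f_3 = x_a x_{d+1} x_e x_{b+1} x_{c+2} x_{f+2} splits as
# (x_a x_{d+1} x_{c+2}) (x_e x_{b+1} x_{f+2}), i.e. the symbol [a d c; e b f]
print(symbol_lead((1, 2, 5), (3, 4, 6)))  # (1, 3, 3, 5, 7, 8)
```

A non-cancelling monomial therefore determines the next symbol of the lifting once its first diagonal is identified.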

(IVA) a < b << d < e − 1, c < e < f

[a b c; d e f] − [a d−1 c; b+1 e f] = (22) − (23)

Consider the monomials with the variables indexed by a, b+1, d (since there are no monomials with a and b): in (22), x_a x_{b+1} x_d x_{c+2} x_{e+1} x_{f+2} and x_a x_{b+1} x_d x_{c+2} x_{e+2} x_{f+1} vanish with monomials of (23). Then we consider the monomials with {a, b+1, d+1} in the support. We have x_a x_{b+1} x_{d+1} x_{c+2} x_{e+2} x_f and x_a x_{b+1} x_{d+1} x_{c+2} x_e x_{f+2} in (22), and f_1 = x_a x_{b+1} x_{d+1} x_{c+1} x_{e+1} x_{f+2} and x_a x_{b+1} x_{d+1} x_{c+1} x_{e+2} x_{f+1} in (23). None of them vanishes. Then in_<((22) − (23)) = f_1, which gives [a d e−1; b+1 c f] = (24). Now in (24) there are no monomials containing the variables indexed by a, b+1, d. Beginning with a, b+1, d+1: x_a x_{b+1} x_{d+1} x_{c+1} x_{e+1} x_{f+2} vanishes with a monomial of (23), but x_a x_{b+1} x_{d+1} x_{c+2} x_{e+2} x_f and x_a x_{b+1} x_{d+1} x_{c+2} x_e x_{f+2} in (22), f_2 = x_a x_{b+1} x_{d+1} x_{c+1} x_{e+2} x_{f+1} in (23) and x_a x_{b+1} x_{d+1} x_{c+2} x_{e+1} x_{f+1} in (24) do not vanish. Then in_<((22) − (23) + (24)) = f_2, which gives [a d e; b+1 c f−1].

(IVB) d = c = e − 1, a < b << e − 1 < e < f − 1 < f

[a b e−1; e−1 e f] − [a e−2 e−1; b+1 e f] = (25) − (26)

Consider the monomials with {a, b+1, e−1} in the support (since there are no monomials with x_a and x_b): in (25), x_a x_{b+1} x_{e−1} x_{e+1} x_{e+1} x_{f+2} and x_a x_{b+1} x_{e−1} x_{e+1} x_{e+2} x_{f+1} vanish with monomials of (26). Considering the monomials with {a, b+1, e}: in (25), x_a x_{b+1} x_e x_e x_{e+1} x_{f+2} vanishes with a monomial of (26), while f_1 = x_a x_{b+1} x_e x_e x_{e+2} x_{f+1} of (26) does not vanish. Then in_<((25) − (26)) = f_1, which we write as [a e−1 e; b+1 e−1 f−1] = (27). Now in (27) there are no monomials with {a, b+1, e−1} in the support, and we again consider monomials containing x_a, x_{b+1}, x_e: x_a x_{b+1} x_e x_{e+1}^2 x_{f+1} and x_a x_{b+1} x_e x_{e+1} x_{e+2} x_f of (25) vanish with monomials of (27). Consider the monomials with {a, b+1, e+1} in the support: x_a x_{b+1} x_{e+1}^3 x_f of (25) vanishes with a product of (27).
Then consider the monomials containing the variables indexed by a, b+2, e−1: x_a x_{b+2} x_{e−1} x_e x_{e+1} x_{f+2} and x_a x_{b+2} x_{e−1} x_e x_{e+2} x_{f+1} of (25) vanish with products of (26) and (27), while f_2 = x_a x_{b+2} x_{e−1} x_{e+1}^2 x_{f+1} of (27) does not vanish. Then in_<((25) − (26) + (27)) = f_2, which we write as [a e−2 e−1; b+2 e f−1].

(IVB1) d = c = e − 1, e + 1 = f, a < b << e − 1 < e < e + 1

[a b e−1; e−1 e e+1] − [a e−2 e−1; b+1 e e+1] = (28) − (29)

Consider first the monomials with {a, b+1, e−1} in the support (since there are no monomials with x_a and x_b): in (28), x_a x_{b+1} x_{e−1} x_{e+1}^2 x_{e+3} and x_a x_{b+1} x_{e−1} x_{e+1} x_{e+2}^2 vanish with monomials of (29). Considering the monomials with x_a, x_{b+1}, x_e: in (28) we have x_a x_{b+1} x_e^2 x_{e+1} x_{e+3}, which vanishes with a monomial of (29); in (29) we have only f_1 = x_a x_{b+1} x_e^2 x_{e+2}^2. Then in_<((28) − (29)) = f_1, which we write as [a e−1 e; b+1 e−1 e] = (30). Now in (30) there are no monomials with x_a, x_{b+1}, x_{e−1} in the support, and we consider the monomials with x_a, x_{b+1}, x_e: 2 x_a x_{b+1} x_e x_{e+1}^2 x_{e+2} of (28) vanishes with a monomial of (30). Consider the monomials with {a, b+1, e+1} in the support: x_a x_{b+1} x_{e+1}^4 of (28) vanishes with a monomial of (30). Then we consider the monomials containing x_a, x_{b+2}, x_{e−1}. It is easy to check that in the sum (28) − (29) + (30) all the monomials with the variable x_a vanish. Then we consider monomials containing x_{a+1}, x_b. They occur only in (28), where f_2 = x_{a+1} x_b x_{e−1} x_{e+1}^2 x_{e+3} does not vanish. Then in_<((28) − (29) + (30)) = f_2, which we write as [a+1 e−2 e−1; b e e+1].

(IVC) d < e − 1, c = e, a < b << d < e − 1 < e < f

[a b e; d e f] − [a d−1 e; b+1 e f] = (31) − (32)

Consider first the monomials with {a, b+1, d} in the support: in (31), x_a x_{b+1} x_d x_{e+1} x_{e+2} x_{f+2} and x_a x_{b+1} x_d x_{e+2} x_{e+2} x_{f+1} vanish with monomials of (32). Then we consider the monomials with {a, b+1, d+1} in the support: f_1 = x_a x_{b+1} x_{d+1} x_e x_{e+2} x_{f+2} of (31) does not vanish, and then in_<((31) − (32)) = f_1, which we write as [a d e; b+1 e−1 f] = (33). Now we consider the products of (31), (32), (33) containing the variables indexed by a, b+1, d+1, e+1: f_2 = x_a x_{b+1} x_{d+1} x_{e+1} x_{e+1} x_{f+1} does not vanish. Then in_<((31) − (32) + (33)) = f_2, which we write as [a d e−1; b+1 e f].

(IVD) d = e − 1, c = e, a < b << e − 1 < e < f − 1 < f

[a b e; e−1 e f] − [a e−2 e; b+1 e f] = (34) − (35)
(IVD) d = e 1 c = e [ a < b << e 1 < e < f 1 < f a b e e 1 e f [ a e 2 e b + 1 e f = (34) (35) 11

Consider the monomials with the variables indexed by a, b+1, e−1 (since there are no monomials with a and b): x_a x_{b+1} x_{e−1} x_{e+1} x_{e+2} x_{f+2} and x_a x_{b+1} x_{e−1} x_{e+2} x_{e+2} x_{f+1} of (34) vanish with products of (35). Then we consider the products containing the variables indexed by a, b+1, e: f_1 = x_a x_{b+1} x_e x_e x_{e+2} x_{f+2} of (34) does not vanish. Then in_<((34) − (35)) = f_1, which we write as + [a e−1 e; b+1 e−1 f] = (36). Now we consider the monomials in (34), (35), (36) containing the variables indexed by a, b+1, e, e+1: f_2 = x_a x_{b+1} x_e x_{e+1} x_{e+2} x_{f+1} of (34) vanishes with a product of (35), and x_a x_{b+1} x_e x_{e+1} x_{e+1} x_{f+2} of (35) vanishes with a product of (36). But there is another product f_3 in (36), equal to f_2, that does not vanish. Then in_<((34) − (35) + (36)) = f_3, which we write as + [a e−1 e; b+1 e f−1].

(IVD1) d = e − 1, c = e, f = e + 1, a < b << e − 1 < e < e + 1. This case is the same as (IVD) up to the first lifting, just putting f = e + 1. Computing the second lifting, the monomial x_a x_{b+1} x_e x_{e+1} x_{e+2} x_{e+2}, corresponding to f_3 of (IVD), vanishes with a monomial of (34). Then we consider monomials with {a, b+1, e+1} in the support: x_a x_{b+1} x_{e+1} x_{e+1} x_{e+1} x_{e+2} vanishes with a product of (36). The monomials with {a, b+2, e−1} in the support: x_a x_{b+2} x_{e−1} x_{e+1} x_{e+1} x_{e+3} of (34) and x_a x_{b+2} x_{e−1} x_e x_{e+2} x_{e+3} of (35) vanish with monomials of (36). The monomials containing the variables indexed by a, b+2, e: 2 x_a x_{b+2} x_e x_{e+1} x_{e+1} x_{e+2} and x_a x_{b+2} x_e x_e x_{e+1} x_{e+3} of (34) vanish with monomials of (35) and (36). The monomials having {a, b+2, e+1} in the support are x_a x_{b+2} x_{e+1} x_{e+1} x_{e+1} x_{e+1} of (34), which vanishes with a monomial of (36).
The monomials containing the variables indexed by a, b+3 are x_a x_{b+3} x_{e−1} x_e x_{e+2} x_{e+2} and x_a x_{b+3} x_{e−1} x_{e+1} x_{e+1} x_{e+2} of (35), which vanish with monomials of (36). Finally, consider the monomials containing the variables indexed by a+1, b, which exist only in (34): f_3 = x_{a+1} x_b x_{e−1} x_{e+1} x_{e+2} x_{e+3} and f_4 = x_{a+1} x_b x_{e−1} x_{e+2} x_{e+2} x_{e+3}. Then in_<((34) − (35) + (36)) = f_3, which we write as [a+1 e−2 e; b e e+1].

(IVE) a < b << d < e < c < f

[a b c; d e f] − [a d−1 c; b+1 e f] = (37) − (38)

Consider first the products containing the monomials with the variables a, b+1, d (since there are no monomials with variables indexed by a, b): in (37), x_a x_{b+1} x_d x_{e+1} x_{c+2} x_{f+2} and x_a x_{b+1} x_d x_{e+2} x_{c+2} x_{f+1} vanish with products of (38). Then we consider the products containing variables indexed by a, b+1, d+1: f_1 = x_a x_{b+1} x_{d+1} x_{e+2} x_{c+2} x_f and f_2 = x_a x_{b+1} x_{d+1} x_e x_{c+2} x_{f+2} of (37), f_3 = x_a x_{b+1} x_{d+1} x_{e+1} x_{c+1} x_{f+2} and f_4 =

x_a x_{b+1} x_{d+1} x_{e+2} x_{c+1} x_{f+1} of (38); f_1, f_2, f_3, f_4 do not vanish, and then in_<((37) − (38)) = f_2, which we write as [a d c; b+1 e−1 f−1] = (39). Now we consider f_5 = x_a x_{b+1} x_{d+1} x_{e+1} x_{c+2} x_{f+1} of (39), with {a, b+1, d+1} in the support. f_1, f_2, f_3, f_4, f_5 do not vanish. Then in_<((37) − (38) + (39)) = f_3, which we write as [a d c−1; b+1 e f].

(VA) b = d − 1, d < c, c + 3 < f, a < b < d < c << e < f

[a d−1 c; d e f] − [a d−1 e−1; d c+1 f] = (40) − (41)

We consider the monomials with variables indexed by a, d (since there are no monomials with variables indexed by a and d−1): x_a x_d x_d x_{c+2} x_{e+1} x_{f+2} of (40) vanishes with a product of (41), while f_1 = x_a x_d x_d x_{c+2} x_{e+2} x_{f+1} of (40) and f_2 = x_a x_d x_d x_{c+3} x_{e+1} x_{f+1} of (41) do not vanish. Then in_<((40) − (41)) = f_1, which gives [a d−1 e; d c+1 f−1] = (42). Now f_2 does not vanish with any product of (41) and (42). Then in_<((40) − (41) + (42)) = f_2, which gives [a d−1 e−1; d c+2 f−1].

(VA1) b = d − 1, d = c, d = f − 3, a < d − 1 < d << d + 2 < d + 3

[a d−1 d; d d+2 d+3] − [a d−1 d+1; d d+1 d+3] = (43) − (44)

We consider the monomials with {a, d} in the support (since there are no monomials with variables indexed by a and d−1): in (43), x_a x_d x_d x_{d+2} x_{d+3} x_{d+5} vanishes with a product of (44), while f_1 = x_a x_d x_d x_{d+2} x_{d+4} x_{d+4} of (43) and f_2 = x_a x_d x_d x_{d+3} x_{d+3} x_{d+4} of (44) do not vanish. Then in_<((43) − (44)) = f_1, which we can write as [a d−1 d+2; d d+1 d+2] = (45). Now in (43) and (44) the monomials with {a, d} in the support are f_1 and f_2, which vanish with products of (45). Then we consider the monomials containing the variables x_a, x_d, x_{d+1}. In (43), x_a x_d x_{d+1}^2 x_{d+3} x_{d+5} and x_a x_d x_{d+1} x_{d+2}^2 x_{d+5} vanish with monomials of (44), and x_a x_d x_{d+1}^2 x_{d+4} x_{d+4} vanishes with a monomial of (45). In (43) and (45), 2 x_a x_d x_{d+1} x_{d+2} x_{d+3} x_{d+4} vanishes with monomials of (44) and (45).
In (44), x_a x_d x_{d+1} x_{d+2} x_{d+3} x_{d+4} and x_a x_d x_{d+1} x_{d+3}^3 vanish with products of (45). The products containing the variables x_a, x_d, x_{d+2} are, in (43), x_a x_d x_{d+2}^2 x_{d+3}^2 and x_a x_d x_{d+2}^3 x_{d+4}

that vanish with products of (44) and (45); the products containing the variables x_a, x_{d+1} are, in (43), x_a x_d x_{d+1}^2 x_{d+2} x_{d+5} and x_a x_{d+1}^2 x_{d+2} x_{d+3} x_{d+5}, which vanish with products of (44), and x_a x_d x_{d+1}^2 x_{d+3} x_{d+4} and x_a x_{d+1}^2 x_{d+2} x_{d+3}^2, which vanish with products of (45); in (45), x_a x_{d+1}^2 x_{d+2} x_{d+3}^2 and x_a x_{d+1} x_{d+2}^3 x_{d+3}, which vanish with products of (46). The products containing the variables x_{d−1} are x_{a+1} x_{d−1} x_{d+1} x_{d+3}^2 x_{d+4} and x_{a+1} x_{d−1} x_{d+1} x_{d+2}^2 x_{d+5} in (43), x_a x_{d−1} x_{d+1} x_{d+3}^3 and f_3 = x_{a+1} x_{d−1} x_{d+1}^2 x_{d+3} x_{d+5} in (44), and x_{a+1} x_{d−1} x_{d+1} x_{d+2} x_{d+3} x_{d+4}, x_{a+1} x_{d−1} x_{d+1}^2 x_{d+4}^2 in (45). None of the previous products vanishes, and in_<((43) − (44) + (45)) = f_3, which we can write as [a+1 d d+1; d−1 d d+3].

(VB) b = d − 1, d < c, c + 3 = f, a < d − 1 < d < c << c + 2 < c + 3

[a d−1 c; d c+2 c+3] − [a d−1 c+1; d c+1 c+3] = (46) − (47)

We consider the monomials with variables x_a, x_d (since there are no monomials with x_a, x_{d−1}): in (46), x_a x_d^2 x_{c+2} x_{c+3} x_{c+5} vanishes with a product of (47), while f_1 = x_a x_d^2 x_{c+2} x_{c+4}^2 (of (46)) and f_2 = x_a x_d^2 x_{c+3}^2 x_{c+4} (of (47)) do not vanish. Then in_<((46) − (47)) = f_1, which gives [a d−1 c+2; d c+1 c+2] = (48). Now f_2 does not vanish with any product of (46) and (48). Then in_<((46) − (47) + (48)) = f_2, and we have [a d c+2; d c c+2].

(VC) d ≤ b, c + 3 < f, a < d ≤ b < c << e < f

[a b c; d e f] − [a b e−1; d c+1 f] = (49) − (50)

We consider first the monomials with variables x_a, x_d: x_a x_d x_{b+1} x_{c+2} x_{e+1} x_{f+2} (in (49)) vanishes with a product of (50), while f_1 = x_a x_d x_{b+1} x_{c+2} x_{e+2} x_{f+1} (in (49)) and f_2 = x_a x_d x_{b+1} x_{c+3} x_{e+1} x_{f+1} (in (50)) do not vanish. Then in_<((49) − (50)) = f_1, which we write as [a b e; d c+1 f−1] = (51). Now f_2 does not vanish with any product of (49) and (50). Then in_<((49) − (50) + (51)) = f_2, which we write as [a b c+1; d e f−1].

(VC1) d ≤ b, c + 3 = f, a < d ≤ b < c << e < f

[a b c; d c+2 c+3] − [a b c+1; d c+1 c+3] = (52) − (53)

We consider the monomials with {a, d} in the support: in (52), x_a x_d x_{b+1} x_{c+2} x_{c+3} x_{c+5} vanishes with a product of (53), while f_1 = x_a x_d x_{b+1} x_{c+2} x_{c+4} x_{c+4} (in (52)) and f_2 = x_a x_d x_{b+1} x_{c+3} x_{c+3} x_{c+4} (in (53)) do not vanish. Then in_<((52) − (53)) = f_1, which we write as [a b c+2; d c+1 c+2] = (54). Now f_1 and f_2 vanish with products of (54), and we consider the monomials containing x_a, x_d, x_{b+2}. The monomial f_3 = x_a x_d x_{b+2} x_{c+1} x_{c+3} x_{c+5} does not vanish. Then in_<((52) − (53) + (54)) = f_3, which we write as [a b+1 c+1; d c c+3].

(VIA) c < d, c + 3 < f, a < b < c < d < e < f

[a b c; d e f] − [a d−1 e−1; b+1 c+1 f] = (55) − (56)

Consider the monomials with {a, b+1} (since there are no monomials containing x_a, x_b): in (55) we have x_a x_{b+1} x_{c+2} x_d x_{e+1} x_{f+2} (which vanishes with a product of (56)), f_1 = x_a x_{b+1} x_{c+2} x_d x_{e+2} x_{f+1} and x_a x_{b+1} x_{c+3} x_d x_{e+1} x_{f+1}. Then in_<((55) − (56)) = f_1, which gives [a d−1 e; b+1 c+1 f−1] = (57). Considering the monomials containing the variables x_a, x_{b+1}, starting with x_a, x_{b+1}, x_{c+2}, we observe that the monomials f_1 and x_a x_{b+1} x_{c+2} x_{d+1} x_e x_{f+2} of (55) vanish with products of (57) and (56), while f_2 = x_a x_{b+1} x_{c+2} x_{d+1} x_{e+2} x_f of (55) does not vanish. Then in_<((55) − (56) + (57)) = f_2, which gives [a d e; b+1 c+1 f−2].

(VIA1) c < d, c + 3 = f, a < b < c < c + 1 < c + 2 < c + 3

[a b c; c+1 c+2 c+3] − [a c c+1; b+1 c+1 c+3] = (58) − (59)

We consider the monomials with {a, b+1} in the support (since there are no monomials containing x_a, x_b). The products containing x_{c+1} in (58) are x_a x_{b+1} x_{c+1} x_{c+2} x_{c+3} x_{c+5},

which vanishes with a product of (59), and f_1 = x_a x_{b+1} x_{c+1} x_{c+2} x_{c+4} x_{c+4}. In (59) we have f_2 = x_a x_{b+1} x_{c+1} x_{c+3} x_{c+3} x_{c+4}. Then in_<((58) − (59)) = f_1, hence [a c c+2; b+1 c+1 c+2] = (60). Now the monomials containing x_a, x_{b+1}, x_{c+1} are x_a x_{b+1} x_{c+1} x_{c+2} x_{c+4} x_{c+4}, which vanishes with f_1, and x_a x_{b+1} x_{c+1} x_{c+3} x_{c+3} x_{c+4}, which vanishes with f_2. Then we have to consider the products containing x_a, x_{b+1}, x_{c+2} in (58), (59) and (60): in (58), x_a x_{b+1} x_{c+2} x_{c+2} x_{c+3} x_{c+4} and x_a x_{b+1} x_{c+2} x_{c+2} x_{c+2} x_{c+5} vanish with two products of (59), while x_a x_{b+1} x_{c+2} x_{c+2} x_{c+3} x_{c+4} and x_a x_{b+1} x_{c+2} x_{c+3} x_{c+3} x_{c+3} vanish with two products of (60). Then we consider the products containing x_a, x_{b+2}, beginning with x_a, x_{b+2}, x_{c+1}: in (58), x_a x_{b+2} x_{c+1} x_{c+1} x_{c+3} x_{c+5}, x_a x_{b+2} x_{c+1} x_{c+3} x_{c+3} x_{c+3} and x_a x_{b+2} x_{c+1} x_{c+2} x_{c+2} x_{c+5} vanish with products of (59), while x_a x_{b+2} x_{c+1} x_{c+1} x_{c+4} x_{c+4} and 2 x_a x_{b+2} x_{c+1} x_{c+2} x_{c+3} x_{c+4} vanish with products of (60). In (59), x_a x_{b+2} x_{c+2} x_{c+2} x_{c+3} x_{c+3} vanishes with a term of (60). Finally we consider the products containing x_{a+1}, x_b. They appear only in (58), and so in_<((58) − (59) + (60)) = x_{a+1} x_b x_{c+1} x_{c+2} x_{c+3} x_{c+5}, which gives [a+1 c c+1; b c+1 c+3].

(VIB) d ≤ c, c + 3 < f, a < b << d ≤ c << e < f

[a b c; d e f] − [a d−1 e−1; b+1 c+1 f] = (61) − (62)

We consider the monomials with {a, b+1} in the support (since there are no monomials with x_a, x_b): in (61), x_a x_{b+1} x_d x_{c+2} x_{e+1} x_{f+2} vanishes with a product of (62), while f_1 = x_a x_{b+1} x_d x_{c+2} x_{e+2} x_{f+1} of (61) does not vanish. Then in_<((61) − (62)) = f_1, which gives [a d−1 e; b+1 c+1 f−1] = (63). Now f_1 vanishes with a monomial of (63), while f_2 = x_a x_{b+1} x_d x_{c+3} x_{e+1} x_{f+1} does not vanish. Then in_<((61) − (62) + (63)) = f_2, which gives + [a d−1 e−1; b+1 c+2 f−1].

(VIB1) d ≤ c, a < b << d ≤ c << c + 2 < c + 3
[a b c; d c+2 c+3] − [a d−1 c+1; b+1 c+1 c+3] = (64) − (65)

Consider the monomials with the variables x_a, x_{b+1} (since there are no monomials with x_a, x_b): in (64) we have x_a x_{b+1} x_d x_{c+2} x_{c+3} x_{c+5}, which vanishes with a product of

(65), while f_1 = x_a x_{b+1} x_d x_{c+2} x_{c+4} x_{c+4} and f_2 = x_a x_{b+1} x_d x_{c+3} x_{c+3} x_{c+4} do not. Then in_<((64) − (65)) = f_1, which we write as [a d−1 c+2; b+1 c+1 c+2] = (66). Here the products containing x_a, x_{b+1}, x_d vanish with products of (64) and (65). So we consider all products containing x_a, x_{b+1}, x_{d+1}. In (64), the products x_a x_{b+1} x_d x_{c+2} x_{c+4} x_{c+4}, x_a x_{b+1} x_{d+1} x_{c+2} x_{c+3} x_{c+4} and x_a x_{b+1} x_{d+1} x_{c+2} x_{c+2} x_{c+5} vanish with products of (64) or (65). The remaining products are f_2 = x_a x_{b+1} x_{d+1} x_{c+2} x_{c+3} x_{c+4} and x_a x_{b+1} x_{d+1} x_{c+3} x_{c+3} x_{c+3}. Then in_<((64) − (65) + (66)) = f_2, which we write as [a d c+1; b+1 c+1 c+2].

References

[1] W. Bruns, J. Herzog, Cohen-Macaulay rings, Cambridge Univ. Press, 1998.
[2] A. Capani, G. Niesi, L. Robbiano, CoCoA: a system for doing computations in commutative algebra. Available via anonymous ftp from cocoa.dima.unige.it.
[3] A. Conca, J. Herzog, G. Valla, Sagbi bases with applications to blow-up algebras, J. reine angew. Math. 474, 1998.
[4] G. Failla, S. Giuffrida, On the Hankel lines variety H(1, n) of G(1, n), African Diaspora Journal of Mathematics, vol. 8, no. 2, 2009.
[5] G. Failla, On certain loci of Hankel r-planes of P^m, preprint, 2010.
[6] G. Failla, On the defining equations of the Hankel varieties H(2, n), preprint, 2011.
[7] S. Giuffrida, R. Maggioni, Hankel Planes, J. of Pure and Appl. Algebra 209, 2007.
[8] P. F. Machado, The initial algebra of maximal minors of a generalized Hankel matrix, Comm. in Algebra, vol. 27, issue 1, 1999.
[9] L. Robbiano, M. Sweedler, Subalgebra bases, Proceedings, Salvador 1988, Eds. W. Bruns, A. Simis, Lect. Notes Math., 1990.

Received:


A PROBLEM OF CONVEX OPTIMIZATION WITH RELEVANCE

Eugen Grycko 1, Werner Kirsch 2, Tobias Mühlenbruch 3, Frank Recker 4

1,2,3 Department of Mathematics and Computer Science
University of Hagen
Universitätsstrasse 1
D Hagen, GERMANY

4 General Reinsurance AG
Theodor-Heuss-Ring 11
D Köln, GERMANY

1 eugen.grycko@fernuni-hagen.de
2 werner.kirsch@fernuni-hagen.de
3 tobias.muehlenbruch@fernuni-hagen.de
4 frank.recker@arcor.de

Summary: The aim of this contribution is a generally accessible presentation of a connection between a benchmark problem of Theoretical Computer Science and an optimization task. The satisfiability problem for conjunctive normal forms is formulated in terms of the calculus of Boolean variables, while the optimization task asks for the maximum of a strictly convex function on a compact polyhedron that is associated with a structured Boolean term. To our knowledge there are no efficient algorithms for the solution of either problem; nevertheless, this contribution can be read as a bridge between Theoretical Computer Science and Convex Optimization.

Abstract: The aim of this contribution is a generally accessible presentation of an interrelation between a benchmark problem of Theoretical Computer Science and an optimization problem. The satisfiability problem is formulated in terms of the calculus of Boolean variables, while in the optimization problem the maximum of a strictly convex function on a compact polyhedron is searched for, the latter being associated with a structured Boolean term.

Although no efficient algorithm for the solution of either problem is known, the paper can be interpreted as a bridge between Theoretical Computer Science and Convex Optimization.

Key Words: disjunction, conjunction, objective function, constraint.

1. Introduction

In school mathematics one learns that the equation

(1.1) x^2 + 1 = 0

has no solution. Mathematicians, however, have created a structure that contains all numbers known until then and in which all the important rules of computation remain valid. In this mathematical superstructure, (1.1) can be solved after all.

In Mathematical Logic one computes, in particular, with Boolean variables, which can take the two values true and false. One question of Mathematical Logic is that of the satisfiability of a conjunctive normal form in Boolean variables, which we introduce in Section 2. Theoretical Computer Science asks for an efficient algorithm by which the satisfiability question could be answered (with computer support).

We show in Section 3 that the satisfiability question for conjunctive normal forms is equivalent to a problem of Convex Optimization, which we understand as a mathematical superstructure over the satisfiability question. An efficient solution of the optimization problem presented here may become a challenge for young mathematicians, especially since it would at the same time solve a spectacular problem of Theoretical Computer Science, namely the question P = NP (cf. the literature on "P vs NP").

2. The Satisfiability Problem for Conjunctive Normal Forms

A Boolean variable A can take (truth) values from the set S := {true, false}. The truth value of A^1 is always identical

with that of A. Let A^{−1} be false exactly when A takes the value true (the exponent −1 stands for negation). The term A^0 shall always take the value false.

Now let A_1, ..., A_d be Boolean variables. The disjunction

∨_{j=1}^d A_j = A_1 ∨ ... ∨ A_d

takes the value false exactly when all the variables A_1, ..., A_d take the value false. The conjunction

∧_{j=1}^d A_j = A_1 ∧ ... ∧ A_d

takes the value true exactly when all the variables A_1, ..., A_d take the value true.

Let σ: {1, ..., d} → {−1, 0, 1} be a map. For the further considerations we need the term

∨_{j=1}^d A_j^{σ(j)};

it is the disjunction of the terms A_j^{σ(j)} with j = 1, ..., d.

Now let σ_1, ..., σ_l: {1, ..., d} → {−1, 0, 1} be maps. A conjunctive normal form in the variables A_1, ..., A_d is defined as the Boolean term

(2.1) ∧_{i=1}^l ( ∨_{j=1}^d A_j^{σ_i(j)} )

(a conjunction of disjunctions).

Now let τ: {1, ..., d} → {−1, 1}; τ can be viewed as an assignment of truth values to the variables A_1, ..., A_d:

(2.2) A_j := true if τ(j) = 1, false if τ(j) = −1

for j = 1, ..., d. There are thus 2^d assignments of truth values to the variables A_1, ..., A_d. Let T_d denote the set of all maps τ: {1, ..., d} → {−1, 1}.

The announced satisfiability problem can now be formulated as follows: Is there an assignment τ ∈ T_d of truth values to the variables A_1, ..., A_d according to (2.2) such that the conjunctive normal form (2.1) returns the value true?

A naive method of answering this question would be to compute the values of the conjunctive normal form (2.1) for all assignments τ ∈ T_d. Since #T_d = 2^d, however, the cost of this brute-force algorithm would grow exponentially in d.

3. A Problem of Convex Optimization

Let d ≥ 2. The set

V_d := {(τ(1), ..., τ(d)) | τ ∈ T_d}

consists of the vertices (i.e., the extreme points) of the d-dimensional cube

W_d := [−1, 1]^d ⊂ R^d.

We consider the function φ: W_d → R defined by

φ(x_1, ..., x_d) := Σ_{j=1}^d x_j^2.

The optimization problem

(3.1) max_{x ∈ W_d} φ(x) = ?

is solved exactly at the 2^d vertices v ∈ V_d of W_d:

max_{x ∈ W_d} φ(x) = φ(v) = d (v ∈ V_d).
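The naive method described in Section 2 is easy to state in code. A minimal sketch (our own illustration; the encoding of a CNF as a list of dictionaries j ↦ σ_i(j) is our choice):

```python
from itertools import product

def cnf_satisfiable(clauses, d):
    """Brute-force satisfiability test for a CNF over A_1, ..., A_d.
    Each clause is a dict {j: sigma(j)} with sigma(j) in {-1, +1};
    an assignment tau maps j to +1 (true) or -1 (false).
    The cost grows like 2^d, as noted in the text."""
    for tau in product((-1, 1), repeat=d):
        # a clause is true iff some literal A_j^{sigma(j)} matches tau
        if all(any(tau[j - 1] == s for j, s in clause.items())
               for clause in clauses):
            return True
    return False

# (A_1 or not A_2) and (not A_1) and (A_2): unsatisfiable
print(cnf_satisfiable([{1: 1, 2: -1}, {1: -1}, {2: 1}], d=2))  # False
# (not A_1 or A_2): satisfiable
print(cnf_satisfiable([{1: -1, 2: 1}], d=2))  # True
```

The exponential loop over all 2^d assignments is exactly the brute-force cost discussed above.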

For a map σ: {1, ..., d} → {−1, 0, 1} put

(3.2) ‖σ‖ := Σ_{j=1}^d |σ(j)| = #{j | σ(j) ≠ 0}.

With σ we associate the constraint

(3.3) Σ_{j=1}^d σ(j) x_j ≥ −‖σ‖ + 1

for x = (x_1, ..., x_d) ∈ W_d. Restriction (3.3) cuts off exactly those vertices v = (τ(1), ..., τ(d)) ∈ V_d of W_d that correspond to the assignments τ ∈ T_d of A_1, ..., A_d for which the disjunction

∨_{j=1}^d A_j^{σ(j)}

returns the value false.

3.1 Example: Let d = 3. Let the map σ: {1, 2, 3} → {−1, 0, 1} be given by σ(1) := −1, σ(2) := 1, σ(3) := 0. The map σ induces the disjunction

(3.4) A_1^{−1} ∨ A_2

and hence, according to (3.3), the constraint

(3.5) −x_1 + x_2 ≥ −1.

(3.5) cuts off the vertices (1, −1, 1) and (1, −1, −1) of the cube W_3, which correspond to the assignments of the Boolean variables A_1, A_2, A_3 for which the disjunction (3.4) returns the value false; this is illustrated in Fig. 3.1.
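The effect of constraint (3.3) can be checked by enumerating the cube vertices. A short sketch for Example 3.1 (our own code; σ is encoded as a tuple of values in {−1, 0, 1}):

```python
from itertools import product

def norm(sigma):
    # ||sigma|| as in (3.2)
    return sum(abs(s) for s in sigma)

def satisfies_constraint(sigma, v):
    # constraint (3.3): sum_j sigma(j) * x_j >= -||sigma|| + 1
    return sum(s * x for s, x in zip(sigma, v)) >= -norm(sigma) + 1

sigma = (-1, 1, 0)            # Example 3.1: induces -x_1 + x_2 >= -1
cut = [v for v in product((-1, 1), repeat=3)
       if not satisfies_constraint(sigma, v)]
print(cut)  # [(1, -1, -1), (1, -1, 1)]
```

The two vertices cut off are precisely those with x_1 = 1 and x_2 = −1, matching the discussion of Fig. 3.1.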

Fig. 3.1: Visualization of the constraint (3.5)

Now let σ_1, ..., σ_l: {1, ..., d} → {−1, 0, 1} be maps that determine the conjunctive normal form (2.1). We consider the restriction

(3.6) D := ∩_{i=1}^l { (x_1, ..., x_d) ∈ W_d | Σ_{j=1}^d σ_i(j) x_j ≥ −‖σ_i‖ + 1 }

of the domain W_d of φ. D contains no vertex v = (τ(1), ..., τ(d))

of the cube W_d for which there is a disjunction ∨_{j=1}^d A_j^{σ_i(j)} with i ∈ {1, ..., l} that yields the value false under the assignment τ. Obviously D is a compact polyhedron, so that the maximum

max_{x ∈ D} φ(x) =: M

exists. Since φ is strictly convex, this maximum can be attained only at vertices of D.

The above considerations may have convinced the reader that the following three statements are equivalent:

(i) M = d;
(ii) D ∩ V_d ≠ ∅;
(iii) the conjunctive normal form ∧_{i=1}^l ( ∨_{j=1}^d A_j^{σ_i(j)} ) is satisfiable.

We have thus established a correspondence between the satisfiability of conjunctive normal forms and a convex optimization problem.

Acknowledgements: We thank Professor Winfried Hochstättler for stimulating conversations that motivated us to write up the present contribution. The authors also thank Dr. Dominique Andres for valuable comments on the first draft of this paper.
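For small d, the equivalence of (ii) and (iii) can be verified directly by enumeration, since at a cube vertex v = τ the constraint for σ_i fails exactly when the i-th disjunction is false under τ. A self-contained sketch (our own code, brute force only, hence exponential in d; clauses are encoded as tuples of values in {−1, 0, 1}):

```python
from itertools import product

def clause_true(sigma, tau):
    # disjunction of A_j^{sigma(j)} under the assignment tau (values +/-1)
    return any(s != 0 and s == t for s, t in zip(sigma, tau))

def cnf_satisfiable(cnf, d):
    # statement (iii), tested by enumerating all assignments
    return any(all(clause_true(sigma, tau) for sigma in cnf)
               for tau in product((-1, 1), repeat=d))

def D_contains_vertex(cnf, d):
    # statement (ii): some vertex of W_d satisfies all constraints (3.3)
    def ok(sigma, v):
        k = sum(abs(s) for s in sigma)
        return sum(s * x for s, x in zip(sigma, v)) >= -k + 1
    return any(all(ok(sigma, v) for sigma in cnf)
               for v in product((-1, 1), repeat=d))

cnf = [(-1, 1, 0), (1, 0, 1), (0, -1, -1)]   # a small example CNF
print(cnf_satisfiable(cnf, 3), D_contains_vertex(cnf, 3))  # True True
```

The two tests agree on every CNF, which is the content of the equivalence (ii) ⇔ (iii); the hard part, of course, is deciding (ii) without enumerating vertices.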

References

S. A. Cook: The Complexity of Theorem Proving Procedures. In: Annual ACM Symposium on Theory of Computing (STOC).

M. R. Garey, D. S. Johnson: Computers and Intractability: A Guide to the Theory of NP-Completeness. Freeman, San Francisco.

Received:

ON THE THERMAL VOLTAGE SIGNAL IN A VIRTUAL NANOCONDUCTOR

Eugen Grycko 1, Werner Kirsch 2, Tobias Mühlenbruch 3

1,2,3 Department of Mathematics and Computer Science
University of Hagen
Universitätsstrasse 1
D Hagen, GERMANY

eugen.grycko@fernuni-hagen.de
werner.kirsch@fernuni-hagen.de
tobias.muehlenbruch@fernuni-hagen.de

Abstract: We consider the electronic gas in a virtual conductor which is described in terms of a modified Drude model. The attractiveness of this model has been rediscovered in the computer era, when it was recognized that a dynamics of the electrons can be implemented efficiently, enabling us to carry out simulation experiments. Nonparametric statistical tools for the evaluation of the experiments are introduced and motivated by consistency results. It turns out that the thermal voltage in a virtual nanoconductor can be surprisingly high. Based on the statistical evaluation, we propose the stationary Ornstein-Uhlenbeck process for the description of the thermal voltage as a function of time; estimates of the parameters of the process are reported and discussed.

AMS Subject Classification: 62G07, 62G20, 62M99, 82B30, 82B40.

Key Words: α-mixing, estimator, system identification.

1. Introduction

Recently we have perceived some interest in the thermal noise of electric charge carriers, cf. [3], [4] and [5]. Statistical Thermodynamics offers a natural access to this kind of phenomena. In particular, the Drude model (cf. [13]) is an attractive possibility for computer-based exploration of electrodynamic

properties that result from the thermal movement of charge carriers. We exemplify this idea for a virtual nanoconductor, which can be analyzed with a reasonable computational effort.

We assume that the thermal voltage signal is a trajectory of a stationary and α-mixing process; these assumptions specify a nonparametric statistical model in which the autoregression function and the marginal distribution are to be estimated. The estimation of the marginal density of a stationary process has been extensively studied during the last decades (cf. [8], [9], [10] and the literature cited therein). To give the reader an impression of the methodic justification of the kernel density estimator, we formulate and prove a weak consistency theorem which is based on mild conditions on the stochastic process; the interested reader is referred to [10], where a stronger result requiring a more sophisticated proof can be found.

The paper is organized as follows. In Section 2 we introduce and comment on α-mixing and stationarity as properties of stochastic processes that are reasonable for describing certain physical phenomena and imply consistency of a natural estimator for the autocovariance. In Section 3 the kernel density estimator is motivated and proposed for the estimation of the marginal distribution in an appropriate (nonparametric) class of stationary processes. In Section 4 we remark that equidistant discrete sampling from a stationary Ornstein-Uhlenbeck process leads to a discrete-time autoregressive process. In Section 5 the modified Drude model describing the electronic gas in a virtual conductor is presented together with a possibility of simulating its dynamics; inspired by standard facts from Electrodynamics, it is pointed out how to extract the virtual thermal voltage values during the computational process.
In Section 6 the computer experiment is specified; we also report and comment on its outcome, applying the statistical estimators motivated in Sections 2 and 3.

2. Stationarity and Mixing in Discrete Time

Let (Ω, A, P) be a probability space and (X_n)_{n=1}^∞ a stochastic process with discrete time; this means that (X_n)_{n=1}^∞ is a sequence of real random variables X_n: Ω → R. We call (X_n) stationary if the distribution of the random vector (X_n, ..., X_{n+j}) is independent of n for j = 1, 2, ....
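Stationarity in this sense can be illustrated numerically. A minimal sketch (our own example, anticipating the autoregressive processes of Section 4; all parameter values are ours): an AR(1) process started from its stationary law has marginal moments that do not drift in n.

```python
import random

def ar1_paths(a, sigma, n, n_paths, seed=0):
    """Simulate n_paths trajectories of X_{t+1} = a X_t + sigma * eps_t,
    eps_t ~ N(0, 1), starting from the stationary N(0, sigma^2/(1 - a^2))
    distribution, so the process is stationary from t = 0 on."""
    rng = random.Random(seed)
    s0 = sigma / (1 - a * a) ** 0.5
    paths = []
    for _ in range(n_paths):
        x = rng.gauss(0.0, s0)
        traj = [x]
        for _ in range(n - 1):
            x = a * x + sigma * rng.gauss(0.0, 1.0)
            traj.append(x)
        paths.append(traj)
    return paths

paths = ar1_paths(a=0.8, sigma=1.0, n=300, n_paths=4000)
# empirical second moment of X_n at an early and a late time point;
# both should be near the stationary value sigma^2/(1 - a^2) = 2.77...
m_early = sum(p[10] ** 2 for p in paths) / len(paths)
m_late = sum(p[250] ** 2 for p in paths) / len(paths)
print(m_early, m_late)
```

Had the process been started from a fixed point instead of the stationary law, the early and late moments would differ, which is the practical content of the definition above.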

Remark 2.1. The assumption of stationarity plays a role in the description of physical systems whose microscopic state fluctuates and whose macroscopic state does not change in time.

Let (α_n) be a real sequence such that

(2.1) |P(A ∩ B) − P(A) · P(B)| ≤ α_n

for all A ∈ σ(X_1, ..., X_k), for all B ∈ σ(X_{k+n}, X_{k+n+1}, ...) and for n, k = 1, 2, ..., where σ(X_1, ..., X_k) denotes the σ-algebra generated by the random vector (X_1, ..., X_k). The stochastic process is called α-mixing if (2.1) and

(2.2) lim_{n→∞} α_n = 0

hold for some sequence (α_n).

Remark 2.2. α-mixing implies that the microstates of the system described by the stochastic process are nearly stochastically independent if they are observed at distant time points.

The numbers α_n satisfying (2.1) can be used for establishing bounds on the covariance of appropriate random variables. For a real random variable Y: Ω → R define

‖Y‖_∞ := sup_{ω ∈ Ω} |Y(ω)|.

Lemma 2.3. Let the process (X_n) be α-mixing and let the sequence (α_n) satisfy (2.1) and (2.2). Let Y, Z: Ω → R be two random variables where Y is σ(X_1, ..., X_k)-measurable and Z is σ(X_{k+n}, X_{k+n+1}, ...)-measurable for some k and some n.

(1) An upper bound for the covariance cov(Y, Z) is given by

|cov(Y, Z)| ≤ 4 ‖Y‖_∞ ‖Z‖_∞ α_n.

(2) An upper bound for the covariance is given by

|cov(Y, Z)| ≤ 8 (1 + E(|Y|^4) + E(|Z|^4)) α_n^{1/2},

where E denotes expectation.

Proof: Lemma 2.3 is a reformulation of Lemma 2 and Lemma 3 in [2].

Let us now suppose that (X_n) is a stationary and α-mixing process and that X_1 is centered (E(X_1) = 0) and square-integrable; under these assumptions the expression cov(X_n, X_{n+k}) is finite and independent of n for k = 0, 1, .... Put

γ(k) := cov(X_1, X_{1+k})  (k = 0, 1, ...).

The sequence (γ(k))_{k=0}^∞ is called the autocovariance function of the process (X_n). Suppose that we would like to estimate γ(k) from the observations X_1, X_2, .... Since E(X_n) = 0 for n = 1, 2, ..., a natural estimator is given by

γ̂_n(k) := (1/(n − k)) Σ_{j=1}^{n−k} X_j X_{j+k}

for n = 1, 2, .... Note that the estimator γ̂_n(k) is unbiased for n = 1, 2, .... Now we formulate general (nonparametric) conditions that are sufficient for weak consistency of (γ̂_n(k))_{n=1}^∞.

Lemma 2.4. Suppose that the process (X_n) is stationary and α-mixing where E(X_1) = 0. Fix a nonnegative integer k and let us assume that

(2.3)  C := sup_{l=0,1,...} E(|X_{1+l} X_{1+k+l}|⁴) < ∞

holds. Then

γ̂_n(k) → γ(k) in probability for n → ∞.

Proof: Due to the Chebyshev inequality it suffices to prove that

lim_{n→∞} Var(γ̂_n(k)) = 0

holds for the sequence of variances. Let (α_n) be a sequence satisfying (2.1) and (2.2). A standard reasoning using the stationarity of (X_n) implies that

Var(γ̂_n(k)) = (1/(n−k)²) Σ_{j=1}^{n−k} Var(X_j X_{j+k}) + (2/(n−k)²) Σ_{1≤i<j≤n−k} cov(X_i X_{i+k}, X_j X_{j+k})

= (1/(n−k)) Var(X_1 X_{1+k}) + (2/(n−k)²) Σ_{l=1}^{n−k−1} (n−k−l) cov(X_1 X_{1+k}, X_{1+l} X_{1+k+l})

≤ (1/(n−k)) Var(X_1 X_{1+k}) + (2/(n−k)) Σ_{l=1}^{n−k−1} |cov(X_1 X_{1+k}, X_{1+l} X_{1+k+l})|.

According to Lemma 2.3(2) we have

|cov(X_1 X_{1+k}, X_{1+l} X_{1+k+l})| ≤ 8 (1 + 2C) α_{l−k}^{1/2}

for l > k. This completes the proof in view of (2.2).

Remark 2.5. The upper bounds for the variance of the estimator γ̂_n(k) presented in the proof of Lemma 2.4 depend on the lag k; the proof suggests that the sample size n required for estimating the covariance γ(k) should be much higher than the lag k:

(2.4)  n ≫ k.

Figure 2.1 in [6] illustrates the consequences of a violation of (2.4) in a more general context.

3. An Estimator for the Marginal Density

Stationarity and α-mixing are mild conditions imposed on a stochastic process (X_n); they are also physically plausible in the context of modeling numerous phenomena (cf. Remarks 2.1 and 2.2). In the present section we propose

the kernel density estimator for a statistical exploration of the marginal distribution of a stationary and α-mixing process. The kernel density estimator is an approved nonparametric procedure for estimating the Lebesgue density of the distribution of observables; in our context it can be defined by

f̂_n(x) := (1/(n h(n))) Σ_{j=1}^n K((x − X_j)/h(n))  (x ∈ R)

for n = 1, 2, ..., where K : R → R₊ is a Lebesgue density of a probability distribution on the real line (the kernel) and (h(n)) is a sequence of bandwidths. To motivate the application of f̂_n for the statistical access to the thermal voltage signal we formulate and prove a weak consistency result for the kernel density estimator.

Theorem 3.1. Let (X_n) be a stationary and α-mixing process where a sequence (α_n) satisfies (2.1) and

(3.1)  Σ_{n=1}^∞ α_n < ∞.

Suppose that the distribution of X_1 has a continuous and bounded Lebesgue density f : R → R₊. Let the kernel K be a bounded density of a probability measure. Put h(n) := c n^{−β} for n = 1, 2, ..., where c > 0 is a constant and 0 < β < 1/2 an exponent. Then

f̂_n(x) → f(x) in probability for n → ∞

for all x ∈ R.

Proof: Fix an arbitrary x ∈ R. Obviously

E(f̂_n(x)) = (1/h(n)) ∫_R K((x − ξ)/h(n)) f(ξ) dξ;

it follows that (f̂_n(x)) is asymptotically unbiased:

lim_{n→∞} E(f̂_n(x)) = f(x)

(cf. Theorem 9.8 in [16]). Therefore it remains to show that

(3.2)  lim_{n→∞} Var(f̂_n(x)) = 0

holds. Now, a standard computation applying the stationarity of (X_n) yields

Var(f̂_n(x)) = (1/(n h(n))²) Σ_{j=1}^n Var(K((x − X_j)/h(n))) + (2/(n h(n))²) Σ_{1≤i<j≤n} cov(K((x − X_i)/h(n)), K((x − X_j)/h(n)))

= (1/(n h(n)²)) Var(K((x − X_1)/h(n))) + (2/(n h(n))²) Σ_{l=1}^{n−1} (n − l) cov(K((x − X_1)/h(n)), K((x − X_{1+l})/h(n)))

≤ ‖K‖²_∞/(n h(n)²) + (2/(n h(n)²)) Σ_{l=1}^{n−1} |cov(K((x − X_1)/h(n)), K((x − X_{1+l})/h(n)))|.

Application of Lemma 2.3(1) entails the inequality

Var(f̂_n(x)) ≤ ‖K‖²_∞/(n h(n)²) + (8 ‖K‖²_∞/(n h(n)²)) Σ_{l=1}^∞ α_l,

which in view of (3.1) proves (3.2), since n h(n)² = c² n^{1−2β} → ∞.

Example 3.2. (autoregressive process) Let (ε_j)_{j=−∞}^∞ be an i.i.d. sequence of random variables distributed according to the normal distribution with mean 0 and variance σ² > 0. Let 0 < a < 1 be a number. Put

X_n := Σ_{j=0}^∞ a^j ε_{n−j}

for n ∈ Z, where Z denotes the set of integers. Obviously, (X_n)_{n=−∞}^∞ is a stationary Gaussian process whose autocovariance function can be computed according to:

γ(l) = cov(X_n, X_{n+l})

= Σ_{i=0}^∞ Σ_{j=0}^∞ a^i a^j E(ε_{n−i} ε_{n+l−j}) = Σ_{i=0}^∞ a^i a^{l+i} σ² = σ² a^l/(1 − a²)

for n ∈ Z and l = 0, 1, .... It follows that

(3.3)  γ(l) = σ² a^{|l|}/(1 − a²)

holds for l ∈ Z. The summability of (γ(l))_{l=−∞}^∞ implies that the spectral density f of (X_n) is given by

f(λ) := (1/2π) Σ_{k=−∞}^∞ γ(k) exp(−ikλ)  (λ ∈ [−π, π));

a standard calculation using (3.3) entails that

f(λ) = σ²/(2π (1 − 2a cos λ + a²))  (λ ∈ [−π, π))

holds. Theorem 5 on p. 67 in [15] now implies that the autoregressive process (X_n) satisfies the assumptions of Lemma 2.4 and of Theorem 3.1. Therefore the estimator (γ̂_n) defined in Section 2 admits consistent estimation of the autocovariance function γ, and the kernel estimator (f̂_n) is weakly consistent for the marginal density of (X_n).

4. Equidistant Sampling from the Ornstein-Uhlenbeck Process

Let (Y_t)_{t∈R} be a centered and stationary Ornstein-Uhlenbeck process on a probability space (Ω, A, P). This means that

(4.1)  E(Y_t) = 0  (t ∈ R)

holds and that

(4.2)  Γ(s) := cov(Y_t, Y_{t+s}) = b exp(−c |s|)

is independent of t for all s ∈ R, where b and c are positive constants (the parameters of the process); the process (Y_t)_{t∈R} is Gaussian, which implies that it is well defined by the covariance structure (4.2) and by the condition of continuity of paths, cf. [1]. Let us now interpret (Y_t)_{t∈R} as a model for a continuous time signal which is sampled at equidistant discrete time points kΔt, k ∈ Z, by a digital device. Put

X_n := Y_{nΔt}  (n ∈ Z).

Obviously, (X_n)_{n∈Z} is a stationary and centered Gaussian process in discrete time; its autocovariance function is given by:

(4.3)  γ(l) = cov(X_0, X_l) = b exp(−c Δt |l|)  (l ∈ Z),

which corresponds to the autocovariance function (3.3) in Example 3.2. We conclude by the Consistency Theorem of Daniell and Kolmogorov (cf. [1, Theorem 35.3]) that the discrete extract (X_n) from (Y_t) is the autoregressive process presented in Section 3, where the parameters a and σ in (3.3) can be adjusted to the parameters b and c of the Ornstein-Uhlenbeck process (Y_t)_{t∈R}, cf. (4.3).

5. The Modified Drude Model for the Electronic Gas

Let us consider a 3-dimensional container C which is modeled by

C := [0, L] × [−w/2, +w/2]² ⊂ R³,

where L, w > 0 denote the edge lengths. We inject N mass points of mass m > 0 into C according to the uniform distribution. The initial velocities v^(1)(0), ..., v^(N)(0) ∈ R³ of the points are generated according to the centered Gaussian distribution N(0, σ²I₃) with mean 0 ∈ R³ and covariance matrix σ²I₃, where I₃ denotes the 3×3-identity matrix; the parameter σ can be interpreted thermally by

(5.1)  σ = (k_B T/m)^{1/2},

where k_B = 1.38 · 10⁻²³ J/K and T > 0 denote the Boltzmann constant and the temperature of the system, respectively, cf. [12].
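The correspondence between the sampled Ornstein-Uhlenbeck process of Section 4 and the autoregressive process of Example 3.2 can be illustrated with a short simulation; matching (3.3) with (4.3) gives a = exp(−cΔt) and σ² = b(1 − a²). The following sketch (our own minimal Python illustration; the parameter values are arbitrary choices, not those of the experiment) also implements the natural autocovariance estimator γ̂_n(k) from Section 2:

```python
import numpy as np

# Parameters of the OU covariance Gamma(s) = b*exp(-c|s|) and the
# sampling interval (illustrative values only).
b, c, dt = 2.0, 0.5, 0.1
a = np.exp(-c * dt)                   # AR(1) coefficient, cf. (3.3) vs. (4.3)
sigma = np.sqrt(b * (1.0 - a ** 2))   # innovation standard deviation

# Simulate the sampled process X_n = Y_{n*dt} as an AR(1) recursion.
rng = np.random.default_rng(0)
n = 200_000
x = np.empty(n)
x[0] = rng.normal(0.0, np.sqrt(b))    # start in the stationary law
for j in range(1, n):
    x[j] = a * x[j - 1] + sigma * rng.normal()

def autocov(x, k):
    """Estimator gamma_n(k) = (1/(n-k)) * sum_j x_j x_{j+k} (Section 2)."""
    m = len(x) - k
    return float(np.dot(x[:m], x[k:k + m]) / m)

# autocov(x, l) should approximate gamma(l) = b * exp(-c*dt*l), cf. (4.3)
```

The estimates autocov(x, l) recover the parametric shape b·exp(−cΔt·l), which is exactly the fitting step used for Figure 3 below.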

Let the system of mass points (gas) evolve according to the Newtonian dynamics, entailing that the micro-constituents do not mutually interact and are reflected at the walls of the container C at appropriate time points. The system can be interpreted as a kinetic model of the ideal gas (cf. [11, Section 2.3]). This model can be implemented in a computer program enabling us to compute a trajectory

(5.2)  (x^(1)(t), ..., x^(N)(t); v^(1)(t), ..., v^(N)(t))  (t ∈ R₊),

where x^(j)(t) and v^(j)(t) denote the position and velocity vector of the j-th mass point at time t, respectively, j = 1, ..., N. Fixing the mass m = 9.11 · 10⁻³¹ kg of the micro-constituents implies that the system can be viewed as a gas of electrons of charge e = −1.602 · 10⁻¹⁹ C confined to a virtual conductor in the sense of a modified Drude model (cf. [13] and [3]) in which the Coulomb repulsion between the electrons is neglected. The electric potential V of the electronic gas as a function of the position vector y and time t is given by

(5.3)  V(y, t) := (e/(4πε₀)) Σ_{j=1}^N 1/‖y − x^(j)(t)‖,

where ε₀ = 8.854 · 10⁻¹² As/Vm and ‖.‖ denote the permittivity of vacuum and the Euclidean norm, respectively; cf. [7] and [14]. Accordingly, the thermal voltage signal U between the ends of the conductor is given by

(5.4)  U(t) := V((L, 0, 0), t) − V((0, 0, 0), t)  (t ∈ R₊).

Example 5.1. A typical size of the electronic gas which can be reasonably processed on a contemporary computer is N = 10⁴. The density ϱ of the electronic gas in copper is given by ϱ = 8.5 · 10²⁸ m⁻³, cf. [4]. This means that under the assumption that the virtual copper probe is a cube (L = w), the typical edge length is L = (N/ϱ)^{1/3} ≈ 4.9 · 10⁻⁹ m, which in our context justifies the term nanoconductor.
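A minimal sketch of this dynamics (free flight with specular reflection at the walls) and of the voltage readout (5.3)-(5.4) could look as follows; the values of N, L, w and the time step are small illustrative choices of ours, not those of the experiment in Section 6:

```python
import numpy as np

kB, m = 1.380649e-23, 9.109e-31   # Boltzmann constant, electron mass
e, eps0 = -1.602e-19, 8.854e-12   # electron charge, vacuum permittivity
N, Lx, w, T, dt = 200, 5e-9, 5e-9, 300.0, 1e-16

lo = np.array([0.0, -w / 2, -w / 2])
hi = np.array([Lx,  w / 2,  w / 2])

rng = np.random.default_rng(1)
pos = rng.uniform(lo, hi, size=(N, 3))                   # uniform initial positions
vel = rng.normal(0.0, np.sqrt(kB * T / m), size=(N, 3))  # Maxwellian velocities, cf. (5.1)

def step(pos, vel):
    """One free-flight step with specular reflection at the container walls."""
    pos = pos + dt * vel
    over, under = pos > hi, pos < lo
    pos = np.where(over, 2 * hi - pos, np.where(under, 2 * lo - pos, pos))
    vel = np.where(over | under, -vel, vel)
    return pos, vel

def voltage(pos):
    """U = V((Lx,0,0)) - V((0,0,0)) from the Coulomb sum (5.3)."""
    ends = np.array([[Lx, 0.0, 0.0], [0.0, 0.0, 0.0]])
    va, vb = (np.sum(1.0 / np.linalg.norm(pos - p, axis=1)) for p in ends)
    return e / (4 * np.pi * eps0) * (va - vb)

for _ in range(100):
    pos, vel = step(pos, vel)
```

The single-reflection treatment in step is adequate here because one free-flight displacement (roughly |v|·dt ≈ 10⁻¹¹ m) is far smaller than the container.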

6. The Computer Experiment and its Outcome

Our statistical access to the voltage signal U in a virtual nanoconductor is based on the generation of a trajectory (5.2) in the phase space during a simulation experiment; in the course of the computational process the values U(Δt · n), n = 0, 1, ..., are sampled, where the time increment Δt is fixed. The nonparametric statistical procedures introduced in Sections 2 and 3 are applied to the sample. Let us specify the input data for the computer experiment. We fix the number N = 10⁴ of electrons to be considered. Put L = w ≈ 5 · 10⁻⁹ m for the length and width of the nanoconductor C (cf. Example 5.1) and T = 300 K for the temperature. The initial positions of the electrons are generated according to the uniform distribution over C and the initial velocity vectors according to the Gaussian distribution N(0, σ²I₃) with parameter σ defined by (5.1). The Newtonian dynamics is imposed on the electronic gas whose microstate (5.2) evolves; during the experiment the thermal voltage values U(Δt · n), n = 0, 1, ..., M = 10⁶, are stored. For the statistical analysis of the stored data we assume the nonparametric statistical model as specified in Sections 2 and 3.

Figure 1: Observed voltage signal
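The kernel density estimator f̂_n of Section 3, as it is applied to such a sample, can be sketched as follows (a hypothetical minimal implementation; the Gaussian kernel and the bandwidth parameters c, β are our illustrative choices, since K and h(n) are left generic above):

```python
import numpy as np

def kernel_density(x, sample, c=1.0, beta=0.3):
    """f_n(x) = (1/(n h)) * sum_j K((x - X_j)/h) with a Gaussian kernel K
    and bandwidth h(n) = c * n**(-beta), 0 < beta < 1/2 (Theorem 3.1)."""
    sample = np.asarray(sample, dtype=float)
    n = len(sample)
    h = c * n ** (-beta)
    u = (np.asarray(x, dtype=float).reshape(-1, 1) - sample) / h
    k = np.exp(-0.5 * u ** 2) / np.sqrt(2.0 * np.pi)
    return k.sum(axis=1) / (n * h)

# Example: estimate a standard normal density from a simulated sample.
rng = np.random.default_rng(2)
sample = rng.normal(size=20_000)
est = kernel_density([0.0], sample)[0]   # true density at 0 is 1/sqrt(2*pi)
```

Evaluating kernel_density on a grid and overlaying the fitted Gaussian is exactly the comparison shown in Figure 2 below.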

Figure 1 shows a typical discrete sample of the voltage signal and confirms the appropriateness of our choice of Δt.

Figure 2: Parametric and nonparametric density estimates

The noisy curve in Figure 2 corresponds to the kernel density estimate of the marginal distribution of the thermal voltage introduced in Section 3, while the smooth curve represents the centered Gaussian distribution whose variance has been estimated by γ̂_M(0), cf. Section 2. Figure 2 confirms the Gaussianity of the stochastic process modeling the voltage signal.

Figure 3: Parametric fit to the nonparametric estimate

The noisy curve in Figure 3 is the nonparametric estimate of the autocovariance function of the thermal voltage observed in the course of the experiment. The smooth curve is the parametric fit (cf. (4.3)) to the nonparametric estimate. The diagram suggests that the parametric model approximates the nonparametric estimate γ̂_M; the slight discrepancy between the curves can be interpreted as a consequence of the statistical risk inherent in the estimator γ̂_M. The estimates of the parameters b and c are b̂ = … V² and ĉ = … s⁻¹. They indicate a surprisingly high average thermal voltage in our virtual nanoconductor, which can be approximated by the square root of b̂. The estimate ĉ can be interpreted as a typical frequency inherent in the signal; the obtained value is far beyond the capabilities of a contemporary oscilloscope, which makes the thermal voltage signal inaccessible for measuring devices. For a numerical illustration of ĉ we observe, however, that

ĉ ≈ k_B T/ℏ

holds, where ℏ denotes the Planck constant.

Summarizing, it can be stated that the simulated thermal voltage signal is statistically identified as a trajectory of a stationary Ornstein-Uhlenbeck process. Although we cannot claim the exactness of the modified Drude model for the description of a real nanoconductor, we believe that the latter would be a challenging object for the experimental study of thermal noise phenomena.

Acknowledgment

The authors would like to thank Georg Pflug from Vienna and Hajo Leschke from Erlangen for valuable comments on the first draft of the present contribution.

References

[1] H. Bauer, Probability Theory. De Gruyter, Berlin, New York (1996).
[2] P. Billingsley, Probability and Measure. 3rd ed., Wiley, New York, Chichester (1995).
[3] E. Grycko, W. Kirsch, M. Könenberg, J. Li, T. Mühlenbruch, J. Rentmeister, Thermal noise in a modified Drude model. Int. J. Pure Appl. Math. 54, No. 4 (2009).
[4] E. Grycko, W. Kirsch, T. Mühlenbruch, Amplification of thermal noise by an electrostatic field. Int. J. Pure Appl. Math. 60, No. 2 (2010).
[5] E. Grycko, W. Kirsch, T. Mühlenbruch, Some quantum mechanical evidence for the amplification of thermal noise in an electrostatic field. Int. J. Pure Appl. Math. 69, No. 4 (2011).
[6] P. Hall, P. Patil, Properties of nonparametric estimators of autocovariance for stationary random fields. Probab. Theory Relat. Fields 99 (1994).
[7] J.D. Jackson, Classical Electrodynamics. 3rd ed., John Wiley & Sons, New York, Chichester, Weinheim (1999).
[8] E. Liebscher, Strong convergence of α-mixing random variables with application to density estimation. Stoch. Proc. Appl. 65 (1996).
[9] E. Liebscher, Asymptotic normality of nonparametric estimators under α-mixing condition. Stat. Prob. Lett. 43 (1999).
[10] E. Liebscher, Estimation of the density and the regression function under mixing conditions. Statistics & Decisions 19 (2001).
[11] O. Moeschlin, E. Grycko, C. Pohl, F. Steinert, Experimental Stochastics. Springer-Verlag, Berlin, Heidelberg, New York (1998).
[12] O. Moeschlin, E. Grycko, Experimental Stochastics in Physics. Springer-Verlag, Berlin, Heidelberg, New York (2006).
[13] R. Müller, Rauschen. 2nd ed., Springer-Verlag, Berlin, Heidelberg, New York (1990).
[14] W. Nolting, Elektrodynamik. 6th ed., Springer-Verlag, Berlin, Heidelberg, New York (2002).

[15] M. Rosenblatt, Stationary Sequences and Random Fields. Birkhäuser, Boston, Basel, Stuttgart (1985).
[16] R.L. Wheeden, A. Zygmund, Measure and Integral. Marcel Dekker, New York, Basel (1977).

Eingegangen am:


A Discrete Stochastic Growth Process by Conquering Boundaries

Roger Böttcher

Mathematics Subject Classification (2010). 26A18, 60G99, 60J20, 68U20.

Keywords. Stochastic process, Markov chains, iteration, simulation.

Abstract. We introduce a plain discrete stochastic growth process (X_n)_{n∈N₀} based on moveable segments which hit a larger one at random. Growth is caused by an overlapping of the fixed and the moveable segments which, in a certain sense, conquer free valences at the boundaries. This stochastic process can be modeled as a Markov chain, which gives us the possibility to derive a lot of properties of this distinctly simple sort of growing. An iterative model based on a recursion formula yields the estimation O(√n) for E(X_n). With a heuristic explanation we discuss an upper bound E(X_n) ≤ x_T(n), where x_T(n) = O(√n) is derived from the mean values of transition times within the Markov chain; i.e. there is strong evidence for E(X_n) = O(√n).

Introduction

The world around us teems with growth, whether in the world of life or in the world of inorganic processes like crystals and other chemical appearances. Hence, every model concerning such processes touches the world we are living in and may give us a new or additional perspective for dealing and interacting with it. This may be the case even for the simplest type of such models of growth processes. In this sense we introduce a model, discrete in spatial extension as well as in time, where mobile segments I_n, n ∈ N, successively and randomly hit a fixed segment K_{n−1} in one dimension, stick together with it and form in such a way a new fixed segment K_n, which is once more an initial situation for an encounter of this kind; see Fig. 1 for an exemplary illustration of this process of tossing and merging of the mobile segments I.

Fig. 1: Discrete growing in one dimension: I_n randomly and uniformly distributed on K_{n−1} (one discrete time step: random placement and merging turns K_{n−1} of length L_{n−1} into K_n of length L_n)

Thus, in our model of one-dimensional growing lies the possibility of expansion, so that the length of the segment K_n is enlarged, i.e. L_n > L_{n−1}, or of stagnation, i.e. L_n = L_{n−1}. Each step of growing is initiated by conquering the borders of formerly reached areas, and each such step lowers the potential for another step of expansion, due to the mere fact that the borders become more and more remote. Does such a growth come to final stagnation? Or does this process, in a stochastic mean, keep growing forever? And then, what is the order of growth? In the following chapters we will develop some answers to these questions.

1 Random placement of exactly one segment

1.1 Discrete placement of one segment I on K

At first we consider just the case of exactly one randomly placed segment I of length d ∈ N on a given segment K of length L = b − a ∈ N₀, a ≤ b with a, b ∈ Z. The term "on" means the condition

K ∩ I ≠ ∅  (1.1)

and Fig. 2 helps to imagine this. If we consider I as well as K as consisting of mere points or knots, the condition (1.1) becomes very clear, see the left of Fig. 2. Nevertheless, in terms of growing, the idea that I and K are formed by blocks or bricks may be more helpful. Mathematically, of course, both imaginations are equal and nothing but imaginations at all. Though this case of tossing exactly one segment on another one is nearly trivial, it gives us some insight into the process, and furthermore it will create a good generator of an interesting iteratively based estimation of the growing process with consecutively placed segments I_k, k ∈ N, in the next section.

Fig. 2: Random placement of one segment I on K under the condition K ∩ I ≠ ∅; on the left hand side we emphasize more the discrete character of the segments, motivating K ∩ I ≠ ∅ even when I hits K only at one of its vertices, so that I and K have one point in common; the imagination on the right hand side focuses more on the length in a geometrical sense, i.e. the length function L

The segment I = {ω, ω+1, ..., ω+d} is moveable, i.e. the coordinate ω ∈ Z of its left vertex is a random variable and is assumed to be uniformly distributed in the set of possible events

Ω := {a−d, a−d+1, ..., b−1, b} ⊂ Z  (1.2)

due to the condition (1.1). The random variable X, describing the growth process of sticking both segments I and K together, is now the length of the union of I and K after this random encounter: X = L(K ∪ I). Summarizing this, we get the following definition.

Definition 1.1. Let I and K be a moveable and a fixed segment, i.e. a set of consecutive points in Z, of length d ∈ N and L ∈ N₀, respectively, sticking together randomly under the condition (1.1). Then the random variable X of growth is defined as

X : Ω → N, ω ↦ X(ω) := L(K ∪ I).

Here L(S) means the length of a given segment S = {s_1, ..., s_n}, which is defined as the difference L(S) := s_n − s_1; i.e. in the discrete world this is the cardinality of the set minus one, thus we have alternatively L(S) = #S − 1. The set Ω is the sample space (1.2) for the placement of I.
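Since Ω is finite, Definition 1.1 can be explored by plain enumeration. The following sketch (our own helper names) computes the exact values of X and its mean, which can be compared with the closed form Ψ(L, d) of Theorem 1.2 below:

```python
from fractions import Fraction

def growth_values(L, d):
    """X(omega) = max(K u I) - min(K u I) for K = {0,...,L} and
    I = {omega,...,omega+d}, omega running through Omega = {-d,...,L}, cf. (1.2)."""
    return [max(L, w + d) - min(0, w) for w in range(-d, L + 1)]

def expected_growth(L, d):
    """Exact expected value E(X) under the uniform distribution on Omega."""
    vals = growth_values(L, d)
    return Fraction(sum(vals), len(vals))

def psi(L, d):
    """Closed form (1.5): Psi(L, d) = (L^2 + L + L*d + d + d^2)/(L + d + 1)."""
    return Fraction(L * L + L + L * d + d + d * d, L + d + 1)
```

Exact rational arithmetic via fractions.Fraction makes the comparison free of rounding issues; e.g. for L = d = 1 both expressions give 5/3.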

In the following we fix K along the ξ-axis in such a way that a = 0 and b = L, without loss of generality. Hence we find immediately the following form for the values X assumes, depending on the length L of the fixed segment K:

X : Ω → N, ω ↦ X(ω) = L(K ∪ I) = max(K ∪ I) − min(K ∪ I) = max({L, ω + d}) − min({0, ω})

 = L − ω if (L ≥ d and −d ≤ ω < 0) or (L < d and −d ≤ ω < L − d);
 = L if L ≥ d and 0 ≤ ω < L − d;            (1.3)
 = d if L < d and L − d ≤ ω < 0;
 = d + ω if (L ≥ d and L − d ≤ ω ≤ L) or (L < d and 0 ≤ ω ≤ L).

The distribution of the random variable X can be grasped by this result:

Theorem 1.2. For the probability distribution of the random variable X which is described in Def. 1.1 we have:

 P(X = x) = (1/(L + d + 1)) · { 2, L ≥ d and L < x ≤ L + d;  L − d + 1, L ≥ d and x = L;  2, L < d and d < x ≤ d + L;  d − L + 1, L < d and x = d;  0, else }.  (1.4)

The expected value and the variance of X are:

 E(X) = (L² + L + Ld + d + d²)/(L + d + 1) = L + d(d + 1)/(L + d + 1) =: Ψ(L, d)  (1.5)

and

 V(X) = (1/(3 (L + d + 1)²)) · { d(d + 1)(1 + L + 2dL − d²), L ≥ d;  L(L + 1)(1 + d + 2dL − L²), L < d }.  (1.6)

Proof. The mass function P(X = x) in (1.4) becomes quite plausible due to the number of placements of the moveable segment yielding the total length L(K ∪ I) = x. More formally, we can also count the events of (X = x) in (1.3) for x = L, L + 1, ..., L + d. For the expected value and variance we have to examine the two cases (1): L ≥ d and (2): 0 ≤ L < d. The calculations are straightforward for both cases:

(1):  E(X) = Σ_{l=L}^{L+d} l P(X = l) = (1/(L + d + 1)) ((L − d + 1) L + 2 Σ_{ν=1}^{d} (L + ν)) = (L² + L + Ld + d + d²)/(L + d + 1),  L ≥ d,

(2):  E(X) = Σ_{l=d}^{d+L} l P(X = l) = (1/(L + d + 1)) ((d − L + 1) d + 2 Σ_{ν=1}^{L} (d + ν)) = (L² + L + Ld + d + d²)/(L + d + 1),  L < d.

Hence we have for the expected value the same expression independent of the case distinction. The calculations for the variance are similar; here we have to deal with sums of squares. The second moments of X are

(1):  E(X²) = Σ_{l=L}^{L+d} l² P(X = l) = (1/(L + d + 1)) ((L − d + 1) L² + 2 Σ_{ν=1}^{d} (L + ν)²) = L² + d(d + 1)(6L + 2d + 1)/(3(L + d + 1)),  L ≥ d,

(2):  E(X²) = Σ_{l=d}^{d+L} l² P(X = l) = (1/(L + d + 1)) ((d − L + 1) d² + 2 Σ_{ν=1}^{L} (d + ν)²) = d² + L(L + 1)(6d + 2L + 1)/(3(L + d + 1)),  L < d.

With V(X) = E(X²) − E(X)² and E(X) due to the previous calculation we receive the two formulas (1.6) in the theorem.

Remark: The part for 0 ≤ L < d in the function V(X)(L, d) of the variance is monotonically increasing in L for a fixed value of d, while the part for 0 < d ≤ L shows a conspicuous maximum in L = (d + 1)(4d − 1)/(2d + 1).

1.2 An excursion to the continuous world

It is possible to derive the continuous pendant to this random placement of two segments using only the result for the discrete case. Let Λ ∈ R₊ be the length of the fixed segment K and δ ∈ R₊ the length of the moveable segment I. We assume that the numbers Λ and δ are commensurable¹, i.e. Λ/δ ∈ Q. Then there exists a common measure, i.e. a real number µ > 0, so that Λ = L µ and δ = d µ with L, d ∈ N. Hence we can apply formula (1.5) to calculate E(X), with X in terms of the segments K and I formed by the lengths L and d as whole numbers. If we set W_µ = µ X as the random variable for the growing segments in terms of the lengths Λ and δ, we have

 E(W_µ) = µ E(X) = µ (L² + L + Ld + d + d²)/(L + d + 1) = (Λ²/µ + Λ + Λδ/µ + δ + δ²/µ)/(Λ/µ + δ/µ + 1).  (1.7)

This common measure is of course not unique; once one of them is found, we have for every division by a whole number m > 0 another measure, i.e. µ̂ = µ/m.
So, formula (1.7) is also valid for

E(W_µ̂) = (m Λ²/µ + Λ + m Λδ/µ + δ + m δ²/µ)/(m Λ/µ + m δ/µ + 1) = (Λ²/µ + Λ/m + Λδ/µ + δ/m + δ²/µ)/(Λ/µ + δ/µ + 1/m)

and shows finally the limit

E(W) := lim_{m→∞} E(W_{µ/m}) = (Λ²/µ + Λδ/µ + δ²/µ)/(Λ/µ + δ/µ) = (Λ² + Λδ + δ²)/(Λ + δ),

¹ The final result also holds for arbitrary lengths Λ and δ, but we cannot use (1.5) directly.

which reveals the expression of the expected value for randomly placed segments of continuous lengths Λ and δ, independent of the initial measure.

Remark I: It is easy to verify the chain of inequalities E(W) < E(W_µ̂) < E(W_µ) with µ̂ < µ. Hence, if we have two segments K and I of which the latter is continuously moveable, we achieve the smallest expected value in the experiment of random placement, in contrast to a coarser division, which yields on average larger sizes of the sticking segments.

Remark II: The same procedure is applicable to the variance, where we get

V(W) = (1/3) (1/(Λ + δ)²) · { (2Λ − δ) δ³, Λ ≥ δ;  (2δ − Λ) Λ³, Λ < δ }

for the continuous lengths Λ and δ, with a maximum in Λ = 2δ.

2 The stochastic growth process as a Markov chain

Now we come to the process (X_n)_{n∈N₀} of consecutively dropped moveable segments I_k, beginning with the degenerated fixed segment K_0 := {0} of length zero:

 K_k := K_{k−1} ∪ I_k with K_{k−1} ∩ I_k ≠ ∅ for k = 1, 2, ..., n,  (2.1)

which can be modeled as a Markov chain.

Theorem 2.1. The stochastic process (X_n)_{n∈N₀} of a discrete growing segment is a homogeneous Markov chain satisfying the Markov condition

 P(X_{n+1} = j | X_n = i) = P(X_{n+1} = j | X_0 = i_0, X_1 = i_1, ..., X_n = i) = p_ij  (2.2)

with the transition probabilities

 p_ij = (1/(i + d + 1)) · { 2, i ≥ d and i < j ≤ i + d;  i − d + 1, i ≥ d and j = i;  2, i < d and d < j ≤ d + i;  d − i + 1, i < d and j = d;  0, else }.  (2.3)

Proof. This is the first occasion where we get some help from the previous chapter: the state (X_{n+1} = j | X_n = i) and the conditional probability P(X_{n+1} = j | X_n = i) = p_ij, respectively, is obviously the case of exactly one randomly placed segment I on a fixed segment K of length L = i, so that X = L(K ∪ I) = j, and hence p_ij becomes (1.4) with L = i and x = j. In particular, the outcome of X_{n+1} depends only on the outcome of the directly preceding trial X_n and only on it.

The mass function of X_n is denoted by µ^(n), understood as a row vector, i.e. we have µ^(n)_i = P(X_n = i) for its components with i ∈ N₀. The transition probabilities are arranged in the transition matrix P = (p_ij)_{0 ≤ i,j} of infinite dimension. Nevertheless both of them, the mass function as well as the transition matrix, are limited in a practical sense: n segments I_k, k = 1, ..., n, can maximally be lined up to a length of n·d. Hence, it is sufficient to consider in all

calculations a vector µ^(n) of length nd + 1 and a matrix P of size (nd + 1) × (nd + 1). The addition of one is caused by the fact that we index the objects beginning at zero, and X_0 := 0 by definition and certainty, i.e. µ^(0) = (1, 0, 0, ...). With this initial condition we have, according to the Chapman-Kolmogorov equation,

 µ^(n) = µ^(0) P^n,  (2.4)

see [9], for the mass function of X_n after placing n segments randomly in the way described above. To see this in action we look at the following example.

Example 2.2. For n = 5 randomly placed segments I_k, k = 1, 2, ..., n, of length d = 3 we derive the mass function of X_n. The transition probabilities (2.3) yield the matrix P, which is generated up to a dimension of nd + 1 = 16. (Note that the first row of P is the mass function of X_1 = d after one placed segment, and of course we have p_{i=0,j=d} = 1.) Raising P to the power of n = 5 and then multiplying it from the left by µ^(0) = (1, 0, 0, ...), we get the mass function

µ^(5) = µ^(0) P^5 = (0, 0, 0, ..., 1/910, 0, ...),

which is the first line of P^5. Fig. 3 shows a simulation, see Section 4, and the graph of the mass function µ^(5).

Fig. 3: On the left: Monte Carlo simulation for d = 3 and n = 5 with outcome X_5 = 9; on the right: mass function of X_5 (blue dots) and relative frequencies of 10⁴ simulations (red line)
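The computation of Example 2.2 can be reproduced with a few lines of code (a sketch with our own function names):

```python
import numpy as np

def transition_matrix(d, size):
    """Truncated transition matrix P = (p_ij) of (2.3), 0 <= i, j < size."""
    P = np.zeros((size, size))
    for i in range(size):
        if i >= d:
            P[i, i] = (i - d + 1) / (i + d + 1)
            lo, hi = i + 1, i + d            # states j with i < j <= i + d
        else:
            P[i, d] = (d - i + 1) / (i + d + 1)
            lo, hi = d + 1, d + i            # states j with d < j <= d + i
        for j in range(lo, min(hi, size - 1) + 1):
            P[i, j] = 2 / (i + d + 1)
    return P

def mass_function(n, d):
    """mu^(n) = mu^(0) P^n with mu^(0) = (1, 0, 0, ...), cf. (2.4)."""
    size = n * d + 1
    mu = np.zeros(size)
    mu[0] = 1.0
    return mu @ np.linalg.matrix_power(transition_matrix(d, size), n)
```

For n = 5, d = 3 this reproduces the vector µ^(5) above; in particular, the probability of maximal growth, reached only by the path 3 → 6 → 9 → 12 → 15, is (2/7)(2/10)(2/13)(2/16) = 1/910.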

It is not very convenient to generate the distribution µ^(n) via the power of a generally large matrix P. In the case of d = 1, or, to say it in terms of Remark II of Section 1.2, for the coarsest measure, we are able to solve the Chapman-Kolmogorov equation for the initial distribution µ^(0) = (1, 0, 0, ...) analytically. This can be achieved using the unilateral z-transform,

Z{µ^(n)} = Σ_{k=0}^∞ µ^(k) z^{−k} = M(z),

which transforms a sequence µ^(n) into a complex function² M(z), see [8]. In our case the sequence is defined by the recurrence µ^(n+1) = µ^(n) P. Applying the property of time shifting,

Z{µ^(n+ν)} = z^ν (M(z) − Σ_{k=0}^{ν−1} µ^(k) z^{−k}),

to it yields (the matrix P is just a constant factor over the transformation)

µ^(n+1) = µ^(n) P  ⟶  z (M(z) − µ^(0) z⁰) = M(z) P.

Eventually the expression on the right hand side in the z-domain can be rearranged and moved back via the inverse z-transform to the space of sequences:

M(z) = µ^(0) (z I − P)^{−1} z  ⟶  µ^(n) = µ^(0) Z^{−1}{(z I − P)^{−1} z},

where I is the identity matrix of sufficient dimension. So we achieve our final task to calculate the n-th power of the transition matrix by

 P^n = Z^{−1}{(z I − P)^{−1} z},  (2.5)

which leads to the following result.

Theorem 2.3. Let I_k, k = 1, ..., n ∈ N, be randomly placed segments of length d = 1 which successively form a growing segment K_n according to the condition (2.1). For the mass function p_i(n) = P(X_n = i), i ≥ 0, of the random variable X_n = L(K_n) we have

 p_i(n) = ((i + 2)/i!) Σ_{k=1}^{i} (−1)^{i−k} (i choose k) (k + 2)^{i−1} (k/(k + 2))^n.  (2.6)

Proof. The Chapman-Kolmogorov equation µ^(n) = µ^(0) P^n elucidates, due to the initial distribution µ^(0) = (1, 0, 0, ...), that the mass function µ^(n) = (p_1(n), p_2(n), ..., p_i(n), ...) is the first row of the n-th power of the matrix P = (p_ij)_{0 ≤ i,j ≤ n} of transition probabilities. We emphasize that the indexing of the matrix P begins with zero for its rows as well as its columns.
² The sequence is indeed indexed by n; hence we consider a z-transform of vector sequences, since µ^(n) itself consists of a sequence, and furthermore we use the bold notation M for a vector function in z. Thus, its components are M_i(z) = Σ_{k=0}^∞ µ^(k)_i z^{−k}. All these vectors are developed to a sufficient length or can be considered of infinite dimension, phased out by zeros.
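Before carrying out this inversion symbolically, the structure of (zI − P)^{−1} for d = 1 can be inspected numerically at an arbitrary point z (a small sanity check of ours for the derivation that follows; the product expression appears below as (2.7)):

```python
import numpy as np

z, size = 2.0, 8

# transition matrix for d = 1: P[i, i] = i/(i+2), P[i, i+1] = 2/(i+2)
P = np.zeros((size, size))
for i in range(size):
    P[i, i] = i / (i + 2)
    if i + 1 < size:
        P[i, i + 1] = 2 / (i + 2)

Q = np.linalg.inv(z * np.eye(size) - P)

# expected: Q is upper triangular with diagonal 1/(z - i/(i+2)),
# and the first row follows the product formula
def q0(i):
    num = np.prod([2.0 / (k + 2) for k in range(1, i)])   # together 2^i/(i+1)!
    den = np.prod([z - k / (k + 2) for k in range(1, i + 1)])
    return num / (z * den)
```

Since zI − P is upper triangular and banded, the entry Q[0, i] only depends on the leading (i+1) × (i+1) block, so the truncation size plays no role in this check.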

According to equation (2.5) we will find this first line by the inverse z-transform of the matrix function (z I − P)^{−1} z. Herein the transition matrix P as well as the identity matrix I is to be developed just to a size of (n + 1) × (n + 1), since we have the starting point K_0 of length zero and the final segment K_n cannot be longer than n, due to d = 1. The matrices we have to cope with are of the following structure: the matrix function

zI − P =
 ( z    −1     0     0    ... )
 ( 0   z − 1/3  −2/3    0    ... )
 ( 0    0    z − 1/2  −1/2   ... )
 ( 0    0     0    z − 3/5  ... )
 ( ...                  ... )

in the z-domain has to be inverted. For the first few elements this yields the following rational fractions:

(zI − P)^{−1} =
 ( 1/z  1/(z(z − 1/3))  (2/3)/(z(z − 1/3)(z − 1/2))  ... )
 ( 0   1/(z − 1/3)    (2/3)/((z − 1/3)(z − 1/2))   ... )
 ( 0   0         1/(z − 1/2)           ... )
 ( ...                          ... )

Herein the first row is to be multiplied by z and subsequently transformed back into the domain of sequences. Now it is our aim to construct this inverse matrix function in general form. Due to the banded structure of zI − P =: (a_ij)_{0 ≤ i,j ≤ n}, where

a_ii = z − i/(i + 2),  a_{i,i+1} = −2/(i + 2)

are the only components in row i different from zero (compare the p_ij according to (2.3) in Theorem 2.1), we realize that the demanded inverse is an upper triangular matrix, i.e. with (zI − P)^{−1} =: (q_ij)_{0 ≤ i,j ≤ n} the lower triangular part (q_ij)_{0 ≤ j < i ≤ n} = (0) vanishes. The diagonal elements q_ii are easily computed, since 1 = a_ii q_ii delivers

q_ii = 1/(z − i/(i + 2)),  i = 0, 1, ..., n.

This principle of looking for successive equations in the product (a_ik)(q_kj) = I leads us now to the components directly indicated above the diagonal elements. Hence, for the general column j ≥ i and i > 0 it follows from

0 = a_{i−1,i−1} q_{i−1,j} + a_{i−1,i} q_ij

that

q_{i−1,j} = −(a_{i−1,i}/a_{i−1,i−1}) q_ij = (2/(i + 1)) q_ij / (z − (i − 1)/(i + 1)),

i.e. especially for j = i we have

q_{i−1,i} = (2/(i + 1)) / ((z − i/(i + 2))(z − (i − 1)/(i + 1))).

And consistently it follows from

0 = a_{i−2,i−2} q_{i−2,j} + a_{i−2,i−1} q_{i−1,j}

that

q_{i−2,j} = −(a_{i−2,i−1}/a_{i−2,i−2}) q_{i−1,j} = (2/i) q_{i−1,j} / (z − (i − 2)/i),

so, furthermore, for j = i it is

q_{i−2,i} = (2/(i + 1)) (2/i) / ((z − i/(i + 2))(z − (i − 1)/(i + 1))(z − (i − 2)/i)).

Pursuing this principle we gain in general for 1 ≤ ν ≤ i the product

q_{i−ν,i} = (1/(z − i/(i + 2))) · (2/(i + 1))/(z − (i − 1)/(i + 1)) · (2/i)/(z − (i − 2)/i) · ... · (2/(i − ν + 2))/(z − (i − ν)/(i − ν + 2)).

In this way, beginning with the diagonal element q_ii, we reach our final destination

 q_{0,i} = (1/z) (Π_{k=1}^{i−1} 2/(k + 2)) (Π_{k=1}^{i} 1/(z − k/(k + 2))) = (1/z) (2^i/(i + 1)!) Π_{k=1}^{i} 1/(z − k/(k + 2)),  (2.7)

lying in the same column just above the diagonal element and in the very first row! To transform this function of the z-domain back to the space of sequences we write this product in the form of partial fractions, i.e.

q_{0,i} = (1/z) (2^i/(i + 1)!) Q_i  with  Q_i := Π_{k=1}^{i} 1/(z − µ_k) = Σ_{k=1}^{i} r_k/(z − µ_k)  and  µ_k := k/(k + 2).

The numerators r_k occurring in the sum are eventually found by

r_k = Q_i (z − µ_k)|_{z=µ_k} = Π_{ν=1, ν≠k}^{i} 1/(µ_k − µ_ν) = Π_{ν=1, ν≠k}^{i} (k + 2)(ν + 2)/(2(k − ν))

= ((k + 2)^{i−1}/2^{i−1}) · ((i + 2)!/(2(k + 2))) · 1/((−1)^{i−k} (k − 1)! (i − k)!)

= ((i + 2)!/(2^i i!)) (−1)^{i−k} (k + 2)^{i−2} k (i choose k),

since Π_{ν≠k}(ν + 2) = (i + 2)!/(2(k + 2)) and Π_{ν≠k}(k − ν) = (−1)^{i−k} (k − 1)! (i − k)!.

Transforming this back to the domain of sequences, see [8], we get for n ≥ 1, applying the inverse z-transform Z^{−1}, the correspondence

z q_{0,i} = (2^i/(i+1)!) Σ_{k=1}^{i} r_k/(z − μ_k)   ↦   (2^i/(i+1)!) Σ_{k=1}^{i} r_k μ_k^{n−1} = p_i(n).

Thus, finally, we arrive at the first row of P^n containing all the probabilities p_i(n) of the mass function of X_n. (Annotation: We excluded n = 0 to avoid the complicated usage of delta and step functions, which would only blur the clarity at this place.) With the expressions above for r_k and μ_k this is exactly the representation (2.6).

Remark I: It is Σ_{i=0}^{n} p_i(n) = 1 as well as p_i(n) = 0 for i > n.

Remark II: We emphasize once more that the mass function (2.6) is only valid for n > 0. For n = 0 it is naturally P(X_0 = 0) = p_0(0) = 1, which is beyond (2.6), since our assumption is to place at least one random segment I. However, we have p_0(n) = 0 for n > 0.

Remark III: The probability p_n(n) can be calculated irrespective of (2.6). To reach a length X_n = n, each of the tossed segments I_k, 1 ≤ k ≤ n, has to fall exactly on one of the two corners of the segment grown until then. Hence, the product on the left is equal to the sum according to (2.6):

p_n(n) = ∏_{k=1}^{n−1} 2/(k+2) = 2^n/Γ(n+2) = ((n+2)/n!) Σ_{k=1}^{n} (−1)^{n−k} C(n,k) k^n/(k+2).

Using Γ(n+2) = (n+1)! = n! (n+1) we receive this nice expression:

(−1)^n (n+1)(n+2) Σ_{k=1}^{n} C(n,k) (−1)^k k^n/(k+2) = 2^n.

The diagram in Fig. 4 shows some mass functions for different values of n, the number of randomly placed segments I_k, 1 ≤ k ≤ n, creating in their entirety a randomly growing segment.

Fig. 4: Some mass functions p_i(n) of X_n with d = 1 for n = 5, 10, 20, 30, 40, …

For the expected value we attain, due to the last theorem, immediately this result:

Corollary 2.4.
If n > 0 randomly placed segments I_k, k = 1, ..., n ∈ ℕ, of length d = 1 successively form a growing segment K_n according to (2.1) with the initial condition K_0 = 0, then the mean of its length is the expected value of the random variable X_n:

E(X_n) = Σ_{i=0}^{n} i p_i(n) = Σ_{i=1}^{n} (i+2)/(i−1)! Σ_{k=1}^{i} (−1)^{i−k} C(i,k) (k+2)^{i−1} (k/(k+2))^n.   (2.8)
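The closed forms (2.6) and (2.8) lend themselves to a direct cross-check against the underlying Markov chain. The following Python sketch is my own illustration (not part of the paper); it uses exact rational arithmetic, builds the first row of P^n for d = 1 from the transition probabilities p_ii = i/(i+2) and p_{i,i+1} = 2/(i+2), and compares it with the mass function and the double sum above.

```python
from fractions import Fraction
from math import comb, factorial

def p_mass(i, n):
    """Mass function (2.6) for d = 1, valid for n > 0."""
    if i == 0:
        return Fraction(0)
    s = sum((-1) ** (i - k) * comb(i, k) * (k + 2) ** (i - 1) * Fraction(k, k + 2) ** n
            for k in range(1, i + 1))
    return Fraction(i + 2, factorial(i)) * s

def first_row_of_power(n):
    """First row of P^n for d = 1, i.e. the distribution of X_n (start in state 0)."""
    row = [Fraction(1)] + [Fraction(0)] * n
    for _ in range(n):
        new = [Fraction(0)] * (n + 1)
        for i, v in enumerate(row):
            if v:
                new[i] += v * Fraction(i, i + 2)          # stay: p_ii = i/(i+2)
                if i < n:
                    new[i + 1] += v * Fraction(2, i + 2)  # grow: p_{i,i+1} = 2/(i+2)
        row = new
    return row

def expected_length(n):
    """E(X_n) for d = 1 by the double sum (2.8)."""
    return sum(Fraction(i + 2, factorial(i - 1))
               * sum((-1) ** (i - k) * comb(i, k) * (k + 2) ** (i - 1)
                     * Fraction(k, k + 2) ** n for k in range(1, i + 1))
               for i in range(1, n + 1))
```

For instance, the row of P^6 coincides entry by entry with p_i(6), the probabilities sum to 1, the corner value is p_6(6) = 2^6/7! as in Remark III, and E(X_2) = 5/3 matches Ψ²(0) for d = 1 as noted later in the proof of Theorem 3.9.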

For up to one hundred random segments the diagram in Fig. 5 shows the expected value of the length of K_n on account of formula (2.8). Nevertheless, for a larger number n of segments this double sum is not really convenient in computations. So we return once more to the beginning of this section, i.e. to recurrences, this time directly for the expected value E(X_n) and for arbitrarily large segments d ≥ 1.

Fig. 5: d = 1, E(X_n) for n = 1 to 100

We discovered that the distribution μ^(n) = (p_0(n), p_1(n), ...) of the total length X_n of the growing segment K_n follows the Chapman–Kolmogorov equation

μ^(n) = μ^(0) P^n.

On the basis of the preceding distribution μ^(n−1) this equation can be formulated as a recursive one:

μ^(n) = μ^(0) P^n = (μ^(0) P^{n−1}) P = μ^(n−1) P.

And furthermore, with the now introduced vector e^(0) := (0, 1, 2, ..., n)^T it is possible to extend this recurrence to the expected values, since we have

E(X_n) = Σ_{i=0}^{n} p_i(n) i = μ^(n) e^(0) = μ^(0) P^n e^(0) = μ^(0) e^(n)

with e^(n) := P^n e^(0). Analogous to the mass function we attain the recursive formula

e^(k+1) = P e^(k),   k = 0, 1, 2, ..., n−1.

In its components the vector e^(k) = (e_0^(k), e_1^(k), ..., e_n^(k)) contains the expected values of X_{i,k}, which denote the random variables of a growing segment starting with length i after k randomly placed segments. Especially for our geometric process, beginning at a segment of vanishing length, we have E(X_ν) = e_0^(ν), ν = 0, 1, 2, ..., n. Due to the sparsity of the matrix P we do not need a full matrix multiplication; it is sufficient to consider only the non-vanishing entries of the matrix and to formulate a mere summation:

Algorithm 2.5 (Expected Value).
A recursive procedure for the computation of E(X_n) is given by the following instructions for Mathematica³:

input: n = number of segments, d = length of segments

(* initialisation *)
e = Table[i, {i, 0, (n + 1) d + 1}];
e1 = 0 e;
k = 0;
EX = {{k, 0}};

(* procedure *)
While[k < n,
  For[i = 0, i <= n d + 1, i++,
    e1[[i + 1]] = Sum[p[i, j, d] e[[j + 1]], {j, i, i + d}]];
  e = e1; k = k + 1;
  EX = Append[EX, {k, e[[1]]}]];

³ It is necessary to pay attention to the fact that indices in Mathematica do not begin with zero but with one. Thus, the components of e are shifted in the form i+1 and j+1, respectively.
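For readers without Mathematica, the same recursion e^(k+1) = P e^(k) can be sketched in Python (my own translation, not from the paper, under the assumption that p(i, j, d) implements the transition probabilities (2.3) given below as the externally defined function):

```python
from fractions import Fraction

def p(i, j, d):
    """Transition probabilities (2.3): states grow from i to j by a segment of length d."""
    denom = i + d + 1
    if i >= d:
        if i < j <= i + d:
            return Fraction(2, denom)
        if j == i:
            return Fraction(i - d + 1, denom)
    else:
        if d < j <= d + i:
            return Fraction(2, denom)
        if j == d:
            return Fraction(d - i + 1, denom)
    return Fraction(0)

def expected_values(n, d):
    """Recursion e^(k+1) = P e^(k) of Algorithm 2.5; returns [E(X_1), ..., E(X_n)]."""
    size = (n + 1) * d + 2          # entries beyond n*d + 1 are meaningless, cf. the annotation
    e = [Fraction(i) for i in range(size)]
    out = []
    for _ in range(n):
        e = [sum(p(i, j, d) * e[j] for j in range(i, min(i + d, size - 1) + 1))
             for i in range(size)]
        out.append(e[0])
    return out
```

Note that each row of p sums to one: (i − d + 1) + 2d = i + d + 1 for i ≥ d, and (d − i + 1) + 2i = i + d + 1 for i < d, matching the denominator.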

Within the procedure we apply the externally defined function, see (2.3) in Theorem 2.1:

p[i_, j_, d_] := 1/(i + d + 1) *
  If[i >= d && i < j && j <= i + d, 2,
    If[i >= d && j == i, i - d + 1,
      If[i < d && d < j && j <= d + i, 2,
        If[i < d && j == d, d - i + 1, 0]]]]

(Annotation to Algorithm 2.5: The sum in the procedure above runs over j = i to i + d with i = 0 to nd + 1. Thus we have to provide e up to a dimension of (n + 1)d + 1, compare the first part of the initialisation, though these places are meaningless for the values of components of e with indices larger than nd + 1.)

The graphs in Fig. 6 are computed with the procedure in Algorithm 2.5. On the left we have a fixed length d; the graphs are lines with i as parameter. On the right the graphs show, with parameter d, the first component of the vector e^(n), i.e. E(X_n) for i = 0.

Fig. 6: On the left: expected values E(X_{i,n}) for the random variable X_{i,n}, which is the total length of n randomly placed segments I_k, k = 1, ..., n, according to (2.1) of length d = 3 on an initial segment K_0 of length i; on the right: expected value E(X_n) for different lengths d

3 Exploring the order O(E(X_n))

First we try to explore the order of the growth process with a heuristic approach. To motivate that method we come back to the case of exactly one randomly dropped segment I on a given segment K of length L, see Theorem 1.2 and the expected value (1.5) for this random variable. This formula can also be read in terms of a conditional expected value:

E(X_{k+1}) = Ψ(L, d) = E(X_{k+1} | X_k = L),

i.e. it reads as: one further segment I_{k+1} is dropped on a just grown segment K_k of mean length L consisting of k segments I_i, i = 1, ..., k, which have already been tossed. Now let us assume that this length is in the mean the expected value, thus we make the assumption L = E(X_k). Then the formula above becomes

E(X_{k+1}) ≈ E(X_{k+1} | X_k = E(X_k)) = Ψ(E(X_k), d),   (3.1)

which is a recurrence with recursion formula Ψ(·, d). After this geometrical explanation we try to see the approximation (3.1) from another direction and to reveal more of the technical aspect of this assumption. Therefore, we start with the exact relation

E(X_{k+1}) = E(E(X_{k+1} | X_k)) = E(Ψ(X_k, d)),

and now, due to Ψ(E(X_k), d) = E(X_{k+1} | X_k = E(X_k)), the heuristic (3.1) above unravels as an assumption about the exchange of E and Ψ, i.e. the assertion

E(Ψ(X_k, d)) ≈ Ψ(E(X_k), d).

We will come back to this viewpoint, pursuing an appropriate relation between these two functions, in the last section of this chapter. Now, for the beginning, we examine the heuristic (3.1) and explore the related recurrence as a subject in itself.

3.1 Two recurrence relations

It is advisable to introduce a relative length L/d into the recurrence, i.e. the length L of the growing segment is always related to the length d of the tossed segments. In this sense we have, due to the formula (1.5) for the expected value E(X), the expression

Ψ(L, d)/d = (1/d) (L² + L + Ld + d + d²)/(L + d + 1) = L/d + ρ/(L/d + ρ)   with   ρ := 1 + 1/d;

thus, for d ≥ 1 it is 1 < ρ ≤ 2. The last expression guides us to the function

ψ : ℝ₊⁰ → ℝ,   t ↦ ψ(t) := (t² + tρ + ρ)/(t + ρ) = t + ρ/(t + ρ).   (3.2)

Hence, the relationship between absolute and relative lengths is

Ψ(L, d) = d ψ(L/d).   (3.3)

To create an upper limit in the following discussion, we additionally introduce the auxiliary function

r : ℝ₊ → ℝ,   t ↦ r(t) := (t² + ρ)/t = t + ρ/t.   (3.4)

Thus, with the difference equations

λ_{k+1} = λ_k + ρ/(λ_k + ρ) = ψ(λ_k), k ≥ 0,   and   y_{k+1} = y_k + ρ/y_k = r(y_k), k ≥ 1,   (3.5)

and the initial conditions λ_0 = 0 and y_1 = ρ, respectively, we have two recurrence relations to approximate E(X_k) according to the heuristic (3.1).

3.2 Estimations of O(λ_n)

Our first aim is to show that the sequences (λ_n) and (y_n) are of the same order. Afterwards it will thus be sufficient to concentrate on the much simpler sequence (y_n). First we prove the following basic properties, using throughout the whole chapter the notation (λ_n) and (y_n) for the sequences (λ_n)_{n∈ℕ₀} and (y_n)_{n∈ℕ} with λ_0 := 0 and y_1 := ρ, respectively, according to the definitions (3.5).

Lemma 3.1. The sequence (y_n) is monotonically growing with y_n > 0 for all n ∈ ℕ.

Proof. Since y_1 = ρ > 0, we have on the basis of y_n > 0 by induction y_{n+1} = y_n + ρ/y_n > 0. Thus, furthermore, it is y_{n+1} = y_n + ρ/y_n > y_n.

Lemma 3.2. The sequences (λ_n) and (y_n) are linked together by the equation

y_n = λ_{n−1} + ρ,   n > 0.   (3.6)

Proof. Between the functions ψ and r we have the relation

ψ(t − ρ) + ρ = (t − ρ) + ρ/((t − ρ) + ρ) + ρ = t + ρ/t = r(t).   (*)

Therefore the following induction succeeds. Obviously we have y_1 = ρ = λ_0 + ρ. Now, on the basis of the proposition (3.6), we conclude due to the relation (*) the equation

λ_n + ρ = ψ(λ_{n−1}) + ρ = ψ(y_n − ρ) + ρ = r(y_n) = y_{n+1};

thus the induction step y_{n+1} = λ_n + ρ is proved.

Corollary 3.3. The sequence (λ_n) is monotonically growing with λ_n ≥ 0 for all n ∈ ℕ₀.

Proof. The assertion holds due to Lemma 3.1 and the equation (3.6) in Lemma 3.2.

Theorem 3.4. For λ_n and y_n, n > 0, see (3.5), we have this chain of inequalities:

1 ≤ λ_n ≤ y_n ≤ λ_n + ρ   and   ρ ≤ y_n ≤ n + ρ − 1   as well as   y_ν ≤ ν for ν ≥ 5.   (3.7)

Proof. The chain is worked through from left to right.
(1) The relation 1 ≤ λ_n, y_n, n > 0, is shown by induction. First it is 1 ≤ λ_1 = 1 and 1 ≤ y_1 = ρ. The induction step is λ_{n+1} = λ_n + ρ/(λ_n + ρ) ≥ λ_n ≥ 1. Analogously we have y_{n+1} = y_n + ρ/y_n ≥ y_n ≥ 1.
(2) On account of Lemma 3.2 it is y_{n+1} = y_n + ρ/y_n = λ_n + ρ. Converting this relation yields λ_n = y_n − ρ(1 − 1/y_n) ≤ y_n, since we have 1 ≤ y_n and thus 1 − 1/y_n ≥ 0 due to (1).
(3) Analogously to (2) we convert y_{n+1} = y_n + ρ/y_n = λ_n + ρ to the equation y_n = λ_n + ρ(1 − 1/y_n) ≤ λ_n + ρ, because once more 1 − 1/y_n ≤ 1 due to y_n > 0.
(4) The inequality ρ ≤ y_n holds, since (y_n) is monotone, see Lemma 3.1, and y_1 = ρ.
(5) The inequality y_n ≤ n + ρ − 1 is shown by induction. First it is y_1 = ρ ≤ ρ. Due to the just proved relation (4), ρ ≤ y_n, i.e. ρ/y_n ≤ 1, and the recurrence (3.5) we have the induction step y_{n+1} = y_n + ρ/y_n ≤ (n + ρ − 1) + 1 = (n + 1) + ρ − 1.
(6) It remains to show y_ν ≤ ν for ν ≥ 5, which is done once more by induction. First, the relation 1 < ρ ≤ 2 is monotonically mapped to 3.2 < y_5 < 4.7 < 5. Applying (4), ρ/y_ν ≤ 1, the induction step performs y_{ν+1} = y_ν + ρ/y_ν ≤ ν + 1.

Corollary 3.5. The sequences (λ_n) and (y_n) are of the same order: O(λ_n) = O(y_n).

Proof. Using the equation (3.6) in Lemma 3.2 as well as the recurrence (3.5) we have for n > 0 the fraction

λ_n/y_n = (y_{n+1} − ρ)/y_n = (y_n + ρ/y_n − ρ)/y_n = 1 − (ρ/y_n)(1 − 1/y_n).

Since (y_n) is monotonically growing, see Lemma 3.1, this gives

0 ≤ lim_{n→∞} λ_n/y_n = 1 < ∞;

thus we have O(λ_n) = O(y_n).

So we have arrived at our first aim. The next step is to examine the behavior of the sequence (y_n), which is the content of the following two theorems.

Theorem 3.6. The sequences (λ_n) and (y_n) are unbounded.

Proof. Due to the inequality (3.7) it is sufficient to prove this behavior for the sequence (y_n). On the basis of the recurrence (3.5), and since y_ν ≤ ν, i.e. 1/y_ν ≥ 1/ν, for ν ≥ 5 due to (3.7), we get by successive substitution

y_{n+1} = y_n + ρ/y_n = y_{n−1} + ρ/y_{n−1} + ρ/y_n = ⋯ = ρ + ρ/y_1 + ⋯ + ρ/y_4 + ρ/y_5 + ⋯ + ρ/y_n
        ≥ ρ (1 + 1/y_1 + ⋯ + 1/y_4 + H_n − H_4) ≥ ρ (H_n − 3/4)

with H_n = Σ_{ν=1}^{n} 1/ν for the n-th harmonic number. Thus, the unbounded behaviour of (y_n) corresponds to the same property of the sequence of harmonic numbers.

Theorem 3.7. The sequences (y_n)_{n>0} and (λ_n)_{n>0} are of the order O(√n).

Proof. Squaring the recurrence (3.5) initiates y_{k+1}² = 2ρ + y_k² + ρ²/y_k², and by successive substitution of y_k² within this formula we get the sum

y_{k+1}² = 2ρ + y_k² + ρ²/y_k² = 2ρ + 2ρ + y_{k−1}² + ρ²/y_{k−1}² + ρ²/y_k² = ⋯ = 2kρ + y_1² + ρ² Σ_{ν=1}^{k} 1/y_ν²,

i.e.

y_n² = 2ρn + ρ² − 2ρ + ρ² Σ_{ν=1}^{n−1} 1/y_ν²,   n > 0,   (3.8)

and thus furthermore we have the estimation

ρn ≤ 2ρn + ρ² − 2ρ ≤ y_n² ≤ 2ρn + ρ² − 2ρ + ρ²(n − 1) ≤ ρ(2 + ρ) n,   (3.9)

since it is 1/y_ν² ≤ 1, ν ≥ 1, see inequality (3.7), as well as 1 < ρ ≤ 2. Due to y_n > 0 we can conclude

√n ≤ y_n ≤ 2√(2n)   for all n ≥ 1.

For the sequence (λ_n) the result follows immediately from Corollary 3.5.
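The linkage (3.6), the chain (3.7), and the O(√n) bounds just proved are easy to confirm numerically. The following Python sketch (my own illustration, not from the paper) iterates both recurrences (3.5), here for ρ = 2, i.e. d = 1:

```python
def iterate(n, rho):
    """Iterates the recurrences (3.5); returns lam[0..n] and y[1..n+1] (y[0] is a dummy)."""
    lam = [0.0]
    y = [None, rho]
    for k in range(n):
        lam.append(lam[k] + rho / (lam[k] + rho))  # lambda_{k+1} = psi(lambda_k)
        y.append(y[-1] + rho / y[-1])              # y_{k+2}     = r(y_{k+1})
    return lam, y
```

Running it for a few hundred steps exhibits y_{k+1} = λ_k + ρ to machine precision, the ordering 1 ≤ λ_k ≤ y_k ≤ λ_k + ρ, and √k ≤ y_k ≤ 2√(2k).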

Although sufficient for the estimation of the order O(√n), this is a rather rough approximation. As the last step of this part on the sequences (λ_n) and (y_n) we create a considerably better estimation.

Theorem 3.8. For the sequence (λ_n)_{n≥0} we have the approximation Θ_l < λ_n < Θ_u with the following functions of ρ and n as lower and upper bound, respectively:

Θ_l(ρ, n) = √(2ρn + ρ² + (ρ/2) ln((n+1)/ρ)) − ρ,
Θ_u(ρ, n) = √(2ρn + ρ² + ρ ln((n+1)(ρ+2)/ρ)) − ρ.

Proof. Related to the last result and proof, respectively, see equation (3.8), we have the representation

y_{k+1}² = 2kρ + ρ² + Σ_{ν=1}^{k} (ρ/y_ν)².   (**)

This is exactly the square with edge length y_{k+1} which is depicted on the right-hand side in Fig. 7. The partition into smaller squares and rectangles is due to the iteration shown on the left.

Fig. 7: Recurrence y_{k+1} = r(y_k) (on the left) and partition of the square y_{k+1}² (on the right)

First we have the pale square of area ρ². Then, the sum of the ρ²/y_ν², from ν = 1 to k, gives the squares along the diagonal, since

(y_{ν+1} − y_ν)² = (y_ν + ρ/y_ν − y_ν)² = (ρ/y_ν)²,   1 ≤ ν ≤ k.

The corners of these squares not lying on the diagonal are placed along the graph of r and its reflected image r*, with the diagonal as axis of symmetry. The last parts are the rectangles, each of them of area

y_ν (y_{ν+1} − y_ν) = y_ν (y_ν + ρ/y_ν − y_ν) = ρ,

which makes in summary 2kρ. We estimate the sum of the squares along the diagonal, the first one excluded, with these two borders:

I_k ≤ Σ_{ν=1}^{k} (ρ/y_ν)² ≤ 2 I_k

with

I_k := ∫_{y_1}^{y_{k+1}} (r(t) − t) dt = ρ ∫_{y_1}^{y_{k+1}} dt/t = ρ (ln y_{k+1} − ln ρ).

That means we underestimate the sum by a mere I_k, since half of a square (y_{ν+1} − y_ν)² exceeds the area of the wedge with the sides y_{ν+1} − y_ν and y_{ν+2} − y_{ν+1}, both perpendicular to each other, and r(t), with y_ν ≤ t ≤ y_{ν+1}; on the other side, the whole tube of 2I_k clearly overestimates the area of the squares inside the curves r and r*. Finally, for y_{k+1} in I_k we use the inequality (3.9), so that we have

ρ [½ ln(ρ(k+1)) − ln ρ] ≤ I_k ≤ Σ_{ν=1}^{k} (ρ/y_ν)² ≤ 2 I_k ≤ 2ρ [½ ln(ρ(2+ρ)(k+1)) − ln ρ].

With the introduced equation (**) and the linkage λ_n = y_{n+1} − ρ in (3.6), see Lemma 3.2, we eventually receive the borders Θ_l and Θ_u.

3.3 The recurrence (λ_n) as a lower bound of E(X_n)

As already explained in the first section, we have E(X) = Ψ(L, d) according to (1.5) for one randomly placed segment on another, fixed one of length L, and furthermore, with the definition

Ψ^n(L) := Ψ(Ψ^{n−1}(L), d),   n ∈ ℕ,   with Ψ^0(L) := L,   (3.10)

for the initial length L = 0 a connection to the sequence (λ_n), since the relation (3.3) says

d λ_n = d ψ(λ_{n−1}) = Ψ(d λ_{n−1}, d) = Ψ(Ψ^{n−1}(0), d) = Ψ^n(0).   (3.11)

Thus we come back to the beginning of this section, connecting the function Ψ to the expected value E(X_n). Now we can prove the following estimation:

Theorem 3.9. With the just defined recurrence (3.10) and the sequence (λ_n)_{n∈ℕ₀} according to (3.5), respectively, we have the inequality

d λ_n = Ψ^n(0) ≤ E(X_n).   (3.12)

That means especially that the stochastic process (X_n) of successively dropped segments of length d, starting with a fixed segment of length L = 0, is unbounded.

Proof. We have by induction for n = 0 the relation Ψ^0(0) = 0 = E(X_0). (Annotation: For n = 1 as well as for n = 2, with Ψ^1(0) = d = E(X_1) and Ψ^2(0) = d(2+3d)/(1+2d) = E(X_2), respectively, even equality holds.)
Since Ψ(·, d) is monotone, the inequality of the proposition, Ψ^n(0) ≤ E(X_n), is preserved under the recurrence:

Ψ^{n+1}(0) = Ψ(Ψ^n(0), d) ≤ Ψ(E(X_n), d).   (*)

Besides monotony, the function Ψ(·, d) is furthermore convex. Hence, on the basis of the conditional expected value

Ψ(X_n, d) = E(X_{n+1} | X_n),   (**)

we can apply Jensen's inequality, see [9], to conclude

Ψ^{n+1}(0) ≤ Ψ(E(X_n), d) ≤ E(Ψ(X_n, d))   (on account of Jensen)
           = E(E(X_{n+1} | X_n))   (due to relation (**))
           = E(X_{n+1}),

which proves the inequality (3.12). Since the recurrences λ_n and Ψ^n(0) are related according to (3.11), Theorem 3.6 implies that E(X_n) is unbounded.

As a consequence of the last theorem we have that E(X_n) is at least of the order O(√n). Since n·d is a rather improbable, nevertheless valid, upper bound, we have O(√n) ≤ O(E(X_n)) ≤ O(n). How well the estimation d λ_n = Ψ^n(0) works is depicted in the graphs of Fig. 8. Additionally, Table 1 presents some numerical computations.

Fig. 8: Boundaries, iterations and expected value, from top to bottom: 2√(2n) (green) according to Theorem 3.7, boundaries (red) and sequence y_n (blue, dotted lines) related to Theorem 3.8, expected values E(X_n) (black dots), see Corollary 2.4, boundaries Θ_{l,u} (red) and sequence λ_n (blue) related to Theorem 3.8, √n (green) according to Theorem 3.7

Table 1: d λ_k, E(X_k), and d y_k for d = 1, i.e. ρ = 2, and some k up to n = 100 (rounded numbers); columns k, λ_k, E(X_k), y_k

4 Simulation of growing segments

4.1 Algorithm

The stochastic process of a growing segment can easily be simulated. With a starting interval K_0 of length zero, we need the number n of segments as well as their length d as input. The following algorithm is designed according to Fig. 2: the values a and b indicate the borders of the currently grown segment K_k, and ω is the uniformly distributed random variable for the coordinate of the left corner of the next placed segment I_{k+1}, so that K_k ∩ I_{k+1} ≠ Ø is accomplished.

Algorithm 4.1 (Growing Segment). The simulation of a growing segment K by n > 0 successively and randomly placed segments I_k, k = 1, ..., n, of length d ∈ ℕ, i.e. the simulation of the stochastic process (X_n), can be achieved by the following algorithm:

input: d, n
a := 0; b := 0
for k = 1 : n
    ω := integer[(b − a + d + 1) · random] + a − d
    a := min(a, ω); b := max(b, ω + d)
end
L_n := b − a

The value of random is a uniformly distributed random number between 0 and 1. The final length L_n represents the value of the random variable X_n.

Fig. 9 shows two simulations due to Algorithm 4.1, which here is enriched with some graphical features. We have two outcomes of n = 60 randomly placed segments of length d = 3. For better observation the segments are positioned historically, i.e. the vertical axis denotes the time of the placed segments. As an envelope we draw the curves (±(d/2) Θ_l(ρ, t), t) according to Theorem 3.8 with t ∈ [1, n], d = 3, and therefore ρ = 4/3.

Fig. 9: Simulations with n = 60 random segments; on the left: L_60 = 36, on the right: L_60 = 35

To get a slightly more abstract but much more efficient overview of the development of the length X_n, the diagram in Fig. 10 renders the successively recorded length L_n, segment by segment, of three different simulations, together with the estimation d Θ_l(ρ, n).

Fig. 10: The paths L_n of three different simulations (each one started afresh) of successively tossed segments from n = 1 up to 100 according to Algorithm 4.1, length of segments d = 3 (paths in different shades of green), as well as the estimation d Θ_l(ρ, n) (in blue)

4.2 Distribution

Besides the total length L_n after n segments have been tossed, also the relative frequencies of this length were recorded for comparison with the mass function of X_n. Fig. 11 and 12 show a small overview of the excellent agreement between analytical results and simulations.
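A direct transcription of Algorithm 4.1 into Python (my own sketch, not from the paper; integer[·] is read as truncation and random as the standard library's uniform generator):

```python
import random

def grow_segment(n, d):
    """Algorithm 4.1: final length L_n of a segment grown by n random placements."""
    a = b = 0  # borders of the currently grown segment K_k
    for _ in range(n):
        # left corner omega of I_{k+1}, uniform on the b - a + d + 1 integer
        # positions in [a - d, b] that guarantee K_k and I_{k+1} overlap
        omega = int((b - a + d + 1) * random.random()) + a - d
        a = min(a, omega)
        b = max(b, omega + d)
    return b - a
```

Averaging many runs of grow_segment(n, d) approximates E(X_n) and reproduces relative frequencies of the kind shown in Fig. 11 and 12.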

Fig. 11: Mass function (blue) and relative frequencies (orange) of trials for n = 5 randomly tossed segments of length d = 3; the mass function is according to Example …

Fig. 12: d = 1: mass function (blue) according to (2.6) in Theorem 2.3 and relative frequencies (orange) of 5000 trials; on the left for n = 10, and on the right for n = 20 randomly tossed segments

5 Transition and sojourn time

Finally we calculate some characteristics, according to [5], of the Markov chain describing the growth process (X_n), whose state-transition diagram can be seen in Fig. 13. Since p_ii < 1 and as a consequence of the geometrical structure, there are only transient states.

Fig. 13: State-transition diagram of the growth process (X_n), states i ≥ 0

First we look for the transition times T_ij := min{n ∈ ℕ : (X_n = j | X_0 = i)}, i.e. the number of segments⁴ from state i to state j, also known as first passage times. With f_ij(n) = P(T_ij = n), see [5], for the probability that, starting the process in state i, it needs exactly n time steps to reach state j, we have for the mean value

m_ij := E(T_ij) = Σ_{n=1}^{∞} n f_ij(n).   (5.1)

We notice that m_ij = 0 for i > j, since in that case it is f_ij(n) = P(T_ij < ∞) = 0 for the probabilities to reach a former state: there is no shrinking in the growth! Now we are

⁴ Whenever we speak of times in this paper, the closer meaning is of course the number of dropped segments. Nevertheless we prefer the term time, which is by far more common on that score. Additionally we can consider that for any time there is a tossed segment, so both notions are linked.

going to exploit this behavior in the following way. Relating to [5] it is possible to express the m_ij in form of a recursion formula, bringing the transition probabilities p_ik back into the computation:

m_ij = E(T_ij) = Σ_{n=1}^{∞} n f_ij(n) = 1 + Σ_{k≠j} p_ik m_kj,   i ≠ j.   (*)

Due to p_ik = 0 for k < i and now m_ij = 0 for i > j, the sum over k ≠ j is finite and the expression (*) becomes

m_ij = 1 + p_ii m_ij + p_{i,i+1} m_{i+1,j} + ⋯ + p_{i,j−1} m_{j−1,j} + p_{i,j+1} m_{j+1,j} + ⋯,

where the terms from p_{i,j+1} m_{j+1,j} onwards vanish, i.e. recombining the addends yields the scalar product

⟨(1 − p_ii, −p_{i,i+1}, ..., −p_{i,j−1}), (m_ij, m_{i+1,j}, ..., m_{j−1,j})⟩ = 1.   (**)

Thus the unknown transition times, gathered in the vector m := (m_0j, m_1j, ..., m_{j−1,j})^T, form a linear system of equations if we start the transitions, i.e. the growth process, in state i = 0 and develop (**) up to state j − 1; hence we have the upper triangular system

(1 − p_00) m_0j − p_01 m_1j − p_02 m_2j − ⋯ − p_{0,j−1} m_{j−1,j} = 1
            (1 − p_11) m_1j − p_12 m_2j − ⋯ − p_{1,j−1} m_{j−1,j} = 1
                                          ⋱
                                        (1 − p_{j−1,j−1}) m_{j−1,j} = 1.

With the submatrix P̃ := (p_iκ)_{0≤i,κ≤j−1} of dimension j of the matrix P of all transition probabilities and an identity matrix I of appropriate dimension, we can succinctly write

(I − P̃) m = 1   (5.2)

where 1 := (1, 1, ..., 1)^T. Especially for d = 1 we can show the following formula.

Theorem 5.1. For the growth process (X_n) introduced in (2.1) the mean transition time of a passage from state 0 to state j > 0, i.e. of a growth to length j, becomes

m_0j = E(T_0j) = j(j + 3)/4.   (5.3)

Proof. The mean value m_0j is the first entry in the vector m, see (**) above, and can be found by solving the linear system of equations (5.2). For d = 1 this is done by using the inverse matrix (I − P̃)^{−1} = (q_ik(1))_{0≤i,k≤j−1} we have deduced in the proof of Theorem 2.3. Thus, the first component is the sum m_0j = E(T_0j) = ⟨(q_0k(1))_{0≤k≤j−1}, 1⟩ = Σ_{k=0}^{j−1} q_0k(1). According to (2.7) we have to set z = 1 and obtain

q_0k(1) = (2^k/(k+1)!) ∏_{ν=1}^{k} 1/(1 − ν/(ν+2)) = (2^k/(k+1)!) ∏_{ν=1}^{k} (ν+2)/2 = (k+2)/2,   k ≥ 0,

which yields in summation the asserted formula (5.3).
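Formula (5.3) can be verified by actually solving the system (5.2). Since for d = 1 the matrix I − P̃ is upper bidiagonal, back substitution suffices; the following Python sketch (my own, exact rational arithmetic, not from the paper) does this:

```python
from fractions import Fraction

def mean_transition_time(j):
    """Solve (I - P~) m = 1 for d = 1 by back substitution; returns m_0j.

    Row i of the system reads (1 - p_ii) m_ij - p_{i,i+1} m_{i+1,j} = 1
    with p_ii = i/(i+2) and p_{i,i+1} = 2/(i+2); the last row has no
    off-diagonal term.
    """
    m = [Fraction(0)] * j
    for i in range(j - 1, -1, -1):
        nxt = m[i + 1] if i + 1 < j else Fraction(0)
        m[i] = (1 + Fraction(2, i + 2) * nxt) / (1 - Fraction(i, i + 2))
    return m[0]
```

For every j the solution agrees with j(j + 3)/4, e.g. m_01 = 1 and m_02 = 5/2.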

Remark I: On the basis of this result we try to formulate another heuristic for E(X_n), asserting that E(T_{0,x_T(n)}) = x_T(n)(x_T(n) + 3)/4 = n is an estimation with x_T(n) ≈ E(X_n), i.e. after an average time or number of segments m_0j = n we should arrive at a length of j = x_T(n) ≈ E(X_n). This yields

E(X_n) ≈ x_T(n) = √(4n + (3/2)²) − 3/2 = O(√n).

The diagram in Fig. 14 shows in comparison the exact value of E(X_n) and two estimations related to this heuristic and the iterative one.

Fig. 14: d = 1: expected value E(X_n) (black dots) according to (2.8), estimations x_T(n) and x_D(n) on the basis of the mean transition time (green) and sojourn time (blue), respectively, and finally the estimation of the iteration d λ_n (red, broken line) on account of (3.5)

Remark II: To strengthen this heuristic, it is our assertion that x_T(n) is an upper bound for E(X_n), i.e. E(X_n) ≤ x_T(n). To support this idea, we compare the following events in the chain, see Fig. 15: X_n = i_1, i.e. after n transitions we stop in state i_1, including the times possibly spent in the loop of state i_1; and n = T_{0,i_2}, i.e. after n transitions we stop in i_2, but just arriving! Taking into account that there is no reflux in the chain, this leads us to the conclusion that on average i_2 ≥ i_1, i.e. presumably x_T(n) ≥ E(X_n). Table 2 supports this estimation with some numerical results.

Fig. 15: If we have an outcome X_n = i_1, then there are at least some transitions possibly spent in the loop of state i_1 (at the top). For an outcome n = T_{0,i_2}, the state i_2 has just been reached (bottom).

Table 2: d λ_n ≤ E(X_n) and x_T(n) as a suggestion for the upper bound of E(X_n); numerical values for d = 1 and the indexed n; E(X_n) is computed by Algorithm 4.1 (rounded numbers); columns n, λ_n, E(X_n), x_T(n)

Next we are looking for the sojourn time D_i, i.e. the time the process remains in state i, once more in the form of a mean value, which is E(D_i) = 1/(1 − p_ii) according to [5]. With the transition probabilities on account of (2.3) we have for i ≥ d

E(D_i) = 1/(1 − p_ii) = (i + d + 1)/(2d)   (5.4)

for the expected number of segments which are tossed while the process, i.e. the growing segment, remains at a length of i. Notice that D_i is a conditional value, since for d > 1 not all states are necessarily approached. Thus, an estimation x_D(n) ≈ E(X_n) analogous to the approach above, like

n = Σ_{i=d}^{x_D(n)} E(D_i) = (x_D(n) − d + 1)(x_D(n) + 3d + 2)/(4d),

must produce approximations that are too low, since it takes too much time to run through all the states in their mean sojourn times; solving for x_D(n) gives

x_D(n) = √(4dn + (2d + 1/2)²) − d − 3/2

as an estimation for E(X_n), see Fig. 14.

Quite different from that are the occupancy times

N_ik(n) = Σ_{τ=1}^{n} 1_{ {X_τ = k | X_0 = i} },   i ≤ k,

which give the time spent in state k during the first n transitions starting in state i. According to [5] the mean value is

φ_ik(n) = E(N_ik(n)) = Σ_{τ=1}^{n} p_ik^(τ).   (5.5)

Here p_ik^(τ) denotes the elements of the τ-th power of the matrix of transition probabilities. Hence, relation (5.5) can be written in the form Φ(n) := (φ_ik(n)) = Σ_{τ=1}^{n} P^τ. For example, the diagram in Fig. 16 depicts the occupancy times for the states k = 0 up to 50 for d = 5 in the first n = 10 transitions, always beginning in state i = 0, both for the analytically computed means and the averages by simulation. Since the state 0 is abandoned immediately in the very first transition, we have φ_0k = 0 for 0 ≤ k ≤ d − 1 = 4. It is conspicuous that k = d = 5 is the only state which is visited more than once. The lower bound of one is understandable, because this state k = d is always occupied just after the beginning, at n = 1, i.e. after leaving the initial state k = 0.
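The closed form (5.4) is a one-line consequence of the diagonal entries of (2.3); a minimal Python check (my own sketch, not from the paper):

```python
from fractions import Fraction

def p_stay(i, d):
    """Diagonal transition probability p_ii = (i - d + 1)/(i + d + 1) for i >= d, cf. (2.3)."""
    return Fraction(i - d + 1, i + d + 1)

def mean_sojourn(i, d):
    """E(D_i) = 1/(1 - p_ii), the mean number of tosses spent at length i (i >= d)."""
    return 1 / (1 - p_stay(i, d))
```

For d = 1 this reproduces E(D_i) = (i + 2)/2, e.g. E(D_3) = 5/2.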
The small mean values of the occupancy times for most of the states are explicable by the fact that not all states k > d are necessarily occupied at all. The larger the length d of the segments, the smaller the probability to occupy a given state k > d. So the curve φ_0k(n) is a fascinating object, especially for a small number of transitions.

Fig. 16: Occupancy times for d = 5 and n = 10; blue line: mean values φ_0k(n) of simulations, red dots: φ_0k(n) as a sum of the first rows of the matrices P^τ from τ = 1 to 10

In the case of d = 1 this possible skipping of states becomes impossible. However, given n transitions, not all states d < k ≤ n are necessarily occupied, of course! Now the mean values of the occupancy times for nearly half of the states are above one, see Fig. 17.
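Relation (5.5) can also be evaluated directly by accumulating the first row through successive multiplications with P. The Python sketch below (my own, for d = 1, not from the paper) does this with exact fractions; since each of the n transitions occupies exactly one state, the accumulated row must sum to n.

```python
from fractions import Fraction

def occupancy_row(n, size):
    """phi_{0k}(n) = sum_{tau=1}^{n} (P^tau)_{0k} for d = 1, cf. relation (5.5)."""
    def row_times_P(row):
        out = [Fraction(0)] * size
        for i, v in enumerate(row):
            if v:
                out[i] += v * Fraction(i, i + 2)           # p_ii = i/(i+2)
                if i + 1 < size:
                    out[i + 1] += v * Fraction(2, i + 2)   # p_{i,i+1} = 2/(i+2)
        return out
    row = [Fraction(0)] * size
    row[0] = Fraction(1)                                   # start in state 0
    phi = [Fraction(0)] * size
    for _ in range(n):
        row = row_times_P(row)
        phi = [a + b for a, b in zip(phi, row)]
    return phi
```

For example, φ_00(n) = 0 (state 0 is left immediately) and φ_01(n) is a plain geometric sum, Σ_{τ=1}^{n} (1/3)^{τ−1}.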

Fig. 17: Occupancy times for d = 1 and again n = 10 transitions; blue line: mean values φ_0k(n) of simulations, red dots: φ_0k(n) computed with formula (5.6)

Since we have an analytically gained expression for the mass function p_0k(τ) according to (2.6) in Theorem 2.3, we can compute the mean occupancy times φ_ik(n) for i = 0 on account of relation (5.5) as a double sum:

φ_0k(n) = E(N_0k(n)) = Σ_{τ=1}^{n} p_0k^(τ)
        = Σ_{τ=1}^{n} (k+2)/k! Σ_{ν=1}^{k} (−1)^{k−ν} C(k,ν) (ν+2)^{k−1} (ν/(ν+2))^τ
        = (k+2)/k! Σ_{ν=1}^{k} (−1)^{k−ν} C(k,ν) (ν(ν+2)^{k−1}/2) (1 − (ν/(ν+2))^n).   (5.6)

Acknowledgments

The author gratefully acknowledges the helpful suggestions of Prof. L. Heinrich (University of Augsburg) and Prof. A. Duma (University of Hagen) during the preparation of this paper.

References

[1] Böttcher, R., Stochastic Growth Processes based on Random Segments, ICM 2010, Hyderabad, Short Communication, Section 13: Probability and Statistics
[2] Domb, C., Covering by Random Intervals and One-Dimensional Continuum Percolation, Journal of Statistical Physics, Vol. 55, No. 1–2, April 1989
[3] Ferrari, P.L., Spohn, H., Random Growth Models, arXiv (math.PR), v2
[4] Ferrari, P.L., Prähofer, M., One-dimensional stochastic growth and Gaussian ensembles of random matrices, arXiv (math-ph), v2
[5] Ibe, O.C., Markov Processes for Stochastic Modeling, Elsevier Academic Press, Boston, 2009
[6] Meakin, P., Fractals, Scaling and Growth Far From Equilibrium, Cambridge University Press, Cambridge, 1998
[8] Råde, L., Westergren, B., Mathematics Handbook, 5th edition, Springer, Berlin, Heidelberg, 2004
[9] Stirzaker, D.R., Grimmett, G.R., Probability and Random Processes, Oxford University Press, Oxford, New York, 2001

Roger Böttcher, Ludwigshafen
Roger.Boettcher@FernUni-Hagen.de

Received:

Spaces of Lipschitz Functions on Metric Spaces

D. Pallaschke and D. Pumplün

Dedicated to Zbigniew Semadeni, the founder of Categorical Functional Analysis.

Abstract

In this paper the universal properties of spaces of Lipschitz functions, defined over metric spaces, are investigated.

1 Basic Properties of Lipschitz Functions

Let (X, d) be a semimetric space, i.e. a metric space for which the condition d(x, y) = 0 does not imply x = y. If there is no danger of confusion, we will omit the semimetric and write X instead of (X, d). Moreover, to exclude the trivial case, we will assume that d ≡ 0 implies card(X) = 1. If several semimetric spaces occur, we will write (X, d_X), i.e. take the space as index.

Definition 1.1 If X, Y are (semi)metric spaces with (semi)metrics d_X, d_Y, a mapping f : X → Y is called Lipschitz iff there exists an M ≥ 0 such that

d_Y(f(x), f(y)) ≤ M d_X(x, y)   for all x, y ∈ X.   (L)

One puts

L(f) := inf{M | M ≥ 0 and d_Y(f(x), f(y)) ≤ M d_X(x, y) for all x, y ∈ X}.   (1)

L(f) is called the Lipschitz constant of f. If L(f) ≤ 1, then f is called a contraction.

D. Pumplün, Faculty of Mathematics and Computer Science, FernUniversitaet Hagen, Hagen, Germany, dieter.pumpluen@fernuni-hagen.de
D. Pallaschke, Institute of Operations Research, University of Karlsruhe KIT, Karlsruhe, Germany, diethard.pallaschke@kit.edu
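On a finite set of points the infimum in (1) is attained as the maximum of the difference quotients, which gives a simple computational illustration of Definition 1.1 (my own sketch, not from the paper):

```python
def lipschitz_constant(points, f, d_X, d_Y):
    """Smallest M with d_Y(f(x), f(y)) <= M * d_X(x, y) on a finite metric space:
    the maximum of the ratios over all pairs of distinct points."""
    ratios = [d_Y(f(x), f(y)) / d_X(x, y)
              for i, x in enumerate(points) for y in points[i + 1:]
              if d_X(x, y) > 0]
    return max(ratios, default=0.0)
```

For instance, on points of the real line with d(x, y) = |x − y|, the absolute value function has Lipschitz constant 1 and is therefore a contraction, while x ↦ 3x has constant 3.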

Special cases of Definition 1.1 arise if Y is a normed or a seminormed linear space and d_Y is the metric induced by the norm; in the following we will mostly study the case Y = R or Y = C. If, for a semimetric space X, ∼_{d_X} denotes the equivalence relation on X induced by the semimetric d_X, then X/∼_{d_X}, the set of equivalence classes, carries a canonical structure of a metric space. A Lipschitz mapping f : X → Y between two semimetric spaces induces a Lipschitz mapping f̂ : X/∼_{d_X} → Y/∼_{d_Y} with L(f) = L(f̂). Hence, the theory of Lipschitz mappings between semimetric spaces cannot yield more information than the theory of Lipschitz mappings between metric spaces. This is the reason why, in the following, only Lipschitz mappings on metric spaces are investigated.

Lemma 1.2 Let A ⊆ X be a non-empty subset of a metric space (X, d) and let dist_A : X → R with dist_A(v) = inf_{x∈A} d(v, x) be the distance function of A. Then: the distance function is a Lipschitz function with Lipschitz constant 0 ≤ L ≤ 1; if A = X then L = 0, and if A ≠ X and there exists a y ∈ X \ A which has a closest point in A, i.e. there exists a z ∈ A with d(y, z) = dist_A(y), then L = 1 and dist_A is an isometry.

Proof. Let x, y ∈ X and ε > 0 be given. By the definition of the infimum there exists a point z ∈ A with dist_A(y) ≥ d(y, z) − ε. We get

dist_A(x) ≤ d(x, z) ≤ d(x, y) + d(y, z) ≤ d(x, y) + dist_A(y) + ε.

By interchanging x and y,

|dist_A(x) − dist_A(y)| ≤ d(x, y)   (2)

follows. Hence, the distance function is a Lipschitz function with Lipschitz constant L ≤ 1. If A = X then dist_A ≡ 0. Now assume that there exists a y ∈ X \ A which

has a closest point in A, i.e. there exists a z ∈ A with d(y, z) = dist_A(y). Then

    dist_A(z) − dist_A(y) = 0 − dist_A(y) = −d(z, y),

which implies L = 1 by (2).

Remark 1.3 The Lipschitz functions on X separate points. More precisely, let (X, d) be a metric space and let {x_1, ..., x_n} ⊆ X be a finite subset of pairwise distinct points, i.e. x_i ≠ x_j for i ≠ j. Then there exist Lipschitz functions ϕ_i : X → R such that for i, j ∈ {1, ..., n}

    ϕ_i(x_j) = 1 if i = j,  ϕ_i(x_j) = 0 if i ≠ j.

Proof. For i ∈ {1, ..., n} put A_i := {x_1, ..., x_n} \ {x_i} and, using Lemma 1.2, define ϕ_i : X → R by

    ϕ_i(x) = dist_{A_i}(x) / dist_{A_i}(x_i).

Then ϕ_i is a Lipschitz function with these properties.

2 Spaces of Lipschitz Functions

In the following let Lip denote the category of metric spaces and Lipschitz maps, and write Lip(X, R) for the set of all Lipschitz functions X → R, X ∈ Lip. Furthermore, we introduce the following subcategories of Lip: Lip_1, the subcategory defined by all contractions; Lip^∞, the subcategory generated by all metric spaces (X, d) of finite diameter, i.e. diam(X) = sup{d(x, y) | x, y ∈ X} < ∞; and finally Lip_1^∞ := Lip_1 ∩ Lip^∞.

For a metric space X consider the space L^∞(X) of all real-valued bounded Borel-measurable functions, endowed with the supremum norm ‖·‖_∞. This is a Banach space for any metric space X. Put, for a metric space X,

    Lip(X) := Lip(X, R) ∩ L^∞(X)

and take as norm

    ‖f‖_L := max{L(f), ‖f‖_∞}.

Note that the same definition makes sense for f : X → E with X ∈ Lip and E ∈ Vec, where Vec denotes the category of real normed linear spaces and

continuous linear maps, and Vec_1 the subcategory defined by linear contractions (for more details see [5] and [9]). Lip(X) carries two norms, ‖·‖_∞ and ‖·‖_L, and we obviously have

    ‖·‖_∞ ≤ ‖·‖_L.   (∗)

Let B_L(Lip(X)) denote the closed unit ball with respect to ‖·‖_L and B(Lip(X)) the closed unit ball with respect to ‖·‖_∞. Obviously

    B_L(Lip(X)) ⊆ B(Lip(X))

holds. One defines

    C(Lip(X)) := {f | f ∈ Lip(X), f(x) ≥ 0 for all x ∈ X}.

C(Lip(X)) is a proper, generating cone in Lip(X), i.e. Lip(X) = C(Lip(X)) − C(Lip(X)). That C(Lip(X)) is proper is trivial. For every f ∈ Lip(X) and every x ∈ X one has

    −‖f‖_∞ ≤ f(x) ≤ ‖f‖_∞,   (∗∗)

which implies that C(Lip(X)) is generating. Furthermore, C(Lip(X)) is closed with respect to ‖·‖_∞ because C(Lip(X)) = ∩_{x∈X} {f ∈ Lip(X) | f(x) ≥ 0} and, moreover, because of (∗) also with respect to ‖·‖_L. In order to avoid mixing up both normed spaces, let Lip_∞(X) := (Lip(X), ‖·‖_∞) and Lip_L(X) := (Lip(X), ‖·‖_L). Moreover, let us point out that the product, as well as the pointwise maximum and minimum, of two bounded Lipschitz functions is again a Lipschitz function, and let us denote by 1I ∈ Lip(X) the constant function 1I(x) = 1 for all x ∈ X. 1I ∈ Lip(X) is an order unit with respect to ‖·‖_∞ (not ‖·‖_L). Furthermore, observe that Lip_∞(X) is in general not a Banach space. To see this, take X := [0, 1], the unit interval with d(x, y) = |x − y|. Then the function f : [0, 1] → R with f(x) := √x is not Lipschitz, but it is the uniform limit of the Lipschitz functions f_n : [0, 1] → R with f_n(x) := min{nx, √x}, because

    sup_{t∈[0,1]} |f_n(t) − f(t)| = 1/(4n).

But Lip_L(X) is a Banach space ([12]). Let us recall some notations: If C is an arbitrary cone of a vector space

91 E, then a convex subset B C is called a base of C if every z C \{0} has a unique representation z = λb with λ > 0 and b B. Every cone C E induces a partial order by x y if and only if y x C. A partial order is called archimedian if, for some y 0 and all λ > 0 x λy implies x 0. For a,b E let [a,b := {z E a z b}. Moreover, we use the notation a b = max{a,b} if the maximum exists and correspondingly the minimum a b = min{a,b}, and write a for a ( a). A norm for E is called a Riesz norm if, for all a,b E, the inequality a b implies a b. An element e C is called an order unit if for every z E there exists a λ > 0 such that λe z λe. Remark 2.1 Let E be a vector space and C E a generating cone. If C has an order unit e C then the function z z = inf{λ > 0 λe z λe} is a norm for E, which is called an order unit norm. If C has a base B C such that the set S = conv (B B) is order bounded, then the Minkowski functional p(z) = inf{λ > 0 z λs} is a norm for E. We call this norm p a base norm and call E a base normed space. The base is denoted by B = Bs(E) and C = R + Bs(E) holds with R + = [0,+ ). For a real vector space E we denote by E the algebraic dual, that is the vector space of all linear forms from E to R. If E is endowed with a locally convex Hausdorff linear topology τ then the pair (E, τ) is called a locally convex topological vector space and we denote by E its topological dual, that is the vector space of all continuous linear forms from E to R. A Saks space is a triple (E,,τ) where is a norm on the real linear topological space E and τ is a locally convex Hausdorff linear topology τ on E such that the unit ball (E) is τ-closed and τ-bounded. For any normed vector space (E, ) the triple (E,,σ(E,E)) is a Saks space, where is the dual norm and σ(e,e) the weak-* topology on E. Proposition 2.2 ForametricspaceX the spacelip (X) := (Lip(X), ) endowed with the pointwise order of functions is a regular ordered order unit 5

normed space with the closed and generating order cone C(Lip(X)), the order unit 1I ∈ Lip(X), and B(Lip(X)) = [−1I, 1I]. Lip_∞(X) is in general not complete.

Proof. The equation B(Lip(X)) = [−1I, 1I] follows from (∗∗). The proof of the remaining assertions is straightforward.

Proposition 2.3 Lip_L(X) is a Banach space and the cone C(Lip(X)) is generating and ‖·‖_L-closed, but ‖·‖_L is not a Riesz norm with respect to C(Lip(X)).

Proof. The completeness of Lip_L(X) is shown in [12], Proposition 1.6.2; that C(Lip(X)) is a proper generating cone was shown above, as was the closedness.

Example 2.4 Define the function f : R → R given by

    f(x) := −1 for x ≤ −1/2,  f(x) := 2x for −1/2 ≤ x ≤ 1/2,  f(x) := 1 for x ≥ 1/2.

Then L(f) = 2 and ‖f‖_∞ = 1, which implies ‖f‖_L = 2, i.e. B_L(Lip(R)) ≠ B(Lip(R)). Note furthermore that if A ⊆ Lip(X) is ‖·‖_∞-closed, then A is also ‖·‖_L-closed, i.e. for the topologies

    τ_∞ ⊆ τ_L

holds, which follows from (∗).

Remark 2.5 For a metric space X, (Lip_L(X), ‖·‖_L, ‖·‖_∞) is a Saks space with the topology of ‖·‖_∞, i.e. a 2-normed linear space in the sense of Semadeni [10], who investigated these spaces, which led to the introduction of Saks spaces. C(Lip(X)) is ‖·‖_L-closed, proper and generating, but does not make Lip_L(X) a regular ordered Saks space because ‖·‖_L is not a Riesz norm with respect to C(Lip(X)) (see [7]). As we have seen, ‖·‖_∞ ≤ ‖·‖_L implies B_L(Lip(X)) ⊆ B(Lip(X)), i.e. B_L(Lip(X)) is ‖·‖_∞-bounded, and B(Lip(X)) is also ‖·‖_L-closed. Additionally it should be noted, for further use, that C(Lip(X)) is 1-normal (see [13], Prop. 9.2(e), p. 86), i.e., g ≤ f ≤ h implies ‖f‖_∞ ≤ max{‖g‖_∞, ‖h‖_∞}. Moreover, Lip_∞(X) is a Stonian vector lattice (see [2], p. 186).
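A quick numerical sanity check of Example 2.4 (an illustration, not part of the paper): the function clamps 2x to [−1, 1], its supremum norm is 1 while its Lipschitz constant, attained on the middle piece, is 2, so the two unit balls differ.

```python
def f(x):
    """Piecewise function from Example 2.4: 2x clamped to [-1, 1]."""
    return max(-1.0, min(1.0, 2.0 * x))

pts = [k / 200 for k in range(-300, 301)]   # grid on [-1.5, 1.5]

sup_norm = max(abs(f(x)) for x in pts)
lip = max(
    abs(f(x) - f(y)) / abs(x - y)
    for x in pts for y in pts if x != y
)

assert abs(sup_norm - 1.0) < 1e-12
assert abs(lip - 2.0) < 1e-9       # slope 2 on the middle piece
assert lip > sup_norm              # hence ||f||_L = L(f) = 2 > ||f||_inf = 1
```

So f lies in the ‖·‖_∞ unit ball but not in the ‖·‖_L unit ball, exactly the separation the example asserts.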

93 As for any normed linear space (E, ), (E,,σ(E,E )) is a Saks space, we have the canonical Saks spaces (Lip(X),,σ(Lip (X),Lip (X) )) and (Lip(X), L,σ(Lip L (X),Lip L (X) )). The first one is even a regularly ordered Saks space because Lip (X) Vec + 1 (see [7, Ex. 3.2 iii) ). 3 The Duals of Lipschitz Functions Spaces Now the connection between Lip (X) and Lip L (X) will be investigated as well asbetween their (topological)dual spaces Lip (X) andlip L (X) which are both linear subspaces of Lip(X). Inequality ( ) implies that in the commutative diagram Id Lip L (X) Lip (X) # λ Id λ Lip (X ) R the identity map is a contraction but not a quasi-isometry( see [12, pp. 3-4). The dual norm of Lip (X) will be denoted by and the dual norm of Lip L (X) by L.NowIdinducesaninjectivecontractionκ X : Lip (X) Lip L (X) with κ X (λ) := λ Id, which may be considered as an inclusion, hence we often write κ X (λ) := λ. For λ Lip (X) one has: κ X (λ) L = sup{ λ(f) f L 1} sup{ λ(f) f 1} = λ, (because of ( )) i.e. taking κ X as inclusion, then L. ( ) Hence, we have (Lip(X) L(Lip(X)), 7

94 i.e. one may regard Lip (X) as a subspace of Lip L (X). In the following we will show that in general Lip (X) is a proper subspace of Lip L (X), Lip (X) = Lip L (X). For this we use the construction of point derivations from D. R. Sherbert [11 (see also [12 Chapter 7) which we briefly outline. Consider the real Banach space l := {x := (x n ) n N (x n ) n N bounded sequence } endowed with the supremum norm x := sup x n. n N Let c l denote the closed subspace of all convergent sequences and let lim : c R be the continuous linear functional which assigns to every convergent sequence its limit. Consider a norm-preserving Hahn-Banach extension LIM of the functional lim to l : c l lim R LIM with the following additional properties: i) LIM n x n = LIM n x n+1, ii) liminf x n LIM n x n limsupx n. n n We shall use the the notation LIM n x n = LIM(x) for x := (x n ) n N l. These functionals LIM are called translation invariant Banach limits. (see [4, Chapter II.4, Exercise 22) Now let (X,d) be a metric space and := {(x,x) X X x X} the diagonal of X X. For a double sequence w ((X X)\ ) N, w := (x n,y n ) n N ) one defines the sequence {( ) } f(yn ) f(x n ) T w (f) := n N, f Lip(X). d(y n,x n ) 8

This yields a mapping

    T_w : Lip(X) → ℓ^∞  with  T_w(f) := ( (f(y_n) − f(x_n)) / d(y_n, x_n) )_{n∈N},

which satisfies ‖T_w(f)‖_∞ ≤ L(f) ≤ ‖f‖_L, i.e. a continuous linear map T_w : Lip_L(X) → ℓ^∞. Hence, for any Banach limit LIM, the composition D_w = LIM ∘ T_w,

    D_w : Lip_L(X) → R  with  D_w(f) = LIM(T_w(f)),

is a continuous linear functional, i.e. D_w ∈ Lip_L(X)′. As the definition of D_w resembles the classical definition of the derivative of functions, one is interested whether, and under which assumptions, D_w is a derivation in the sense of Bourbaki [3], i.e. whether, for f, g ∈ Lip(X) and x ∈ X,

    D_w(fg) = f D_w(g) + g D_w(f)   (PR′)

holds — i.e. in our case, as the left-hand side does not depend (directly) on x ∈ X while the right-hand side does, the modified version

    D_w(fg) = f(x) D_w(g) + g(x) D_w(f),   (PR)

where the dependence of D_w(fg) on x ∈ X has to be specified. The answer to this is (see [11], Prop. 8.5):

Proposition 3.1 If (X, d) is a metric space and w := ((x_n, y_n))_{n∈N} ∈ ((X × X) \ Δ)^N satisfies x_0 = lim_n x_n = lim_n y_n, i.e. x_0 ∈ X is non-isolated, then (PR) is satisfied with x = x_0, and D_w is called a point derivation in x_0.

Proof. If (a_n)_{n∈N}, (b_n)_{n∈N} ∈ ℓ^∞ and (a_n)_{n∈N} is convergent with α = lim_n a_n, then for any Banach limit LIM one has

    LIM_n(a_n b_n − α b_n) = 0,

because |a_n b_n − α b_n| = |a_n − α| |b_n| ≤ B |a_n − α| if ‖(b_n)_{n∈N}‖_∞ = B, which implies lim_n(a_n b_n − α b_n) = LIM_n(a_n b_n − α b_n) = 0. Hence

    LIM_n(a_n b_n) = α LIM_n b_n

follows. This implies, for f, g ∈ Lip(X),

    D_w(fg) = LIM(T_w(fg))
            = LIM_n ( ((fg)(y_n) − (fg)(x_n)) / d(y_n, x_n) )
            = LIM_n ( f(y_n) · (g(y_n) − g(x_n))/d(y_n, x_n) + g(x_n) · (f(y_n) − f(x_n))/d(y_n, x_n) )
            = f(x_0) LIM_n ( (g(y_n) − g(x_n))/d(y_n, x_n) ) + g(x_0) LIM_n ( (f(y_n) − f(x_n))/d(y_n, x_n) )
            = f(x_0) D_w(g) + g(x_0) D_w(f),

which completes the proof.

In [11], Prop. 8.5, it is shown that for isolated x_0 any point derivation D_w in x_0 vanishes if w fulfills the condition of Proposition 3.1.

Proposition 3.2 In general, Lip_∞(X)′ is a proper subspace of Lip_L(X)′:

    Lip_∞(X)′ ≠ Lip_L(X)′.

Proof. Consider the metric space X := [0, 1] with the usual metric d(x, y) = |x − y|. Let D_w ∈ Lip_L([0, 1])′ be any point derivation for x_0 = 0 and define the sequence ϕ_k, k ∈ N, k > 1, in Lip_L(X) by

    ϕ_k(x) := min{kx, 1/k}.

Then max_x ϕ_k(x) = ϕ_k(1/k) = 1/k. This implies that the sequence (ϕ_k)_{k∈N} converges to 0 with respect to ‖·‖_∞. A trivial calculation shows D_w(ϕ_k) = k. Since the constant function 0 has D_w(0) = 0 and the sequence (D_w(ϕ_k))_{k∈N} is unbounded, it follows that D_w ∉ Lip_∞(X)′, which completes the proof.

4 The Space of Point Functionals

For a metric space X, a central role in our investigations is played by the mapping

    δ_X : X → Lip(X)*  with  δ_X(x)(f) := f(x),  x ∈ X, f ∈ Lip(X).

97 First note that for every x X the linear functional δ X (x) Lip(X) is continuous with respect to both norms and L. The restriction of δ X to Lip (X) is denoted by δx and to Lip L(X) by δx L. The upper indicees are omitted if misunderstandings are not possible. Proposition 4.1 Let (X, d) be a metric space. Then (a) For all x X, δx (x) = δl X (x) L = 1 holds. (b) δx L : X Lip L(X) is a contraction. (c) δ X is injective and δ X (X) is a linearly independent set. Proof. (a): For both norms and L one has δx L (x)(f) L = sup{ δ X(x) f L (Lip(X))} = sup{ f(x) f L (Lip(X))} sup{ f(x) f 1} = δx(x) max{ f,1} max{ f L,1} 1. Hence, as 1I(x) = 1, δ X (x) L = δ X(x) = 1 follows. (b): One has δx L (x) δl X (y) L = sup{ δx L (x)(f) δl X (y)(f) f L 1 } = sup{ f(x) f(y) f L 1} sup{ f L d(x,y) f L 1} = d(x,y). To prove (c) let α 1,α 2,...,α n R and x 1,x 2,...,x n X be given with n x i x k for i k, and assume that α j δ X (x j ) = 0 holds. By Remark 1.3 j=1 there exist ϕ 1,ϕ 2,...,ϕ n Lip(X) with 1 : i = j ϕ i (x j ) = δ ij = 0 : i j n Now α j δ X (x j )(ϕ i ) = 0 yields α i = 0. j=1 Next we consider the linear subspace generated by the point functionals { } n D(X) := δ X := λ = α i δ X (x i ) x 1,...,x n X, α 1,...,α n R,n N of Lip(X). i=1 11

98 Remark 4.2 By D L (X) we denote the space D(X) endowed with the dual norm L and by D (X) we denote the space D(X) endowed with the dual norm. The canonical injections are δx L and δ X. At the beginning of section 2 it was already pointed out, that a normed linear space E and a metric space X with metric d, the notions Lipschitz function for a mapping f : X E and ϕ as well ϕ L are defined analogously to the case E = R. Hence, for δx and δl X one may try to compute these norms. As δx is not Lipschitz only δl X remains. For the sake of brevity we denote the -norm of δx L : X Lip L(X) with δx L and get from Proposition 4.1 δ L X = sup{ δ X (x) L x X} = 1, which implies δ L X L = 1 for the L-norm, as L ( δ L X) 1. Theorem 4.3 Let X be a metric space, E Vec 1 with norm and ϕ : X E, ϕ Lip with ϕ L 1. Then there exists a unique ϕ 0 : D L (X) E in Vec 1 with ϕ = (ϕ 0 )δ L X and ϕ 0 = ϕ L such that X δ L X ϕ (D L (X)) (E) (ϕ 0 ) commutes. Proof. We may assume that ϕ 0 because otherwise the statement is trivially true. The assumption ϕ L 1 implies ϕ(x) (E), hence we may restrict ϕ to (E) in its image domain. As misunderstandings are not possible we will denote the restriction also by ϕ. As {δ X (x) x X} is a basis of D L (X) the linear mapping ϕ 0 : D(X) E is well defined by ( n ) n ϕ 0 α i δ X (x i ) := α i ϕ(x i ) i=1 and satisfies ϕ = ϕ 0 δ X. A routine calculation shows that ϕ 0 is a linear mapping. In order to prove ϕ 0 = ϕ L, let λ E with λ 0, then i=1 λ ϕ L λ ϕ L and hence 12 λϕ λ ϕ L (Lip L (X)).

99 For ξ = n α i δx L (x i) D L (X) there exists a λ ξ E with λ ξ = 1 and i=1 λ ξ (ϕ 0 (ξ)) = ϕ 0 (ξ) E. Now: ( ( n )) ( n ) ϕ 0 (ξ) E = λ ξ ϕ 0 α i δx L (x i) = λ ξ α i ϕ(x i ) = i=1 i=1 n α i λ ξ ϕ(x i ) ( n λ ξ ϕ n ) (λξ ) ϕ L α i (x i ) ϕ i=1 L = ϕ L α i δx(x L ϕ i ) ϕ i=1 L { } n = ϕ L sup ξ(f) = α i f(x i ) f L (Lip(X)) ϕ L ξ L which implies ϕ 0 ϕ L 1. i=1 On the other hand, ϕ L = ϕ 0 δ L X L ϕ 0 δ L X L = ϕ 0 (because of Remark 4.2 ) and hence ϕ L = ϕ 0 follows. i=1 Remark 4.4 For a metric space X, Theorem 4.3 means that the mapping δ L X (D L(X)) is universal with respect to all Lipschitz mappings ϕ : X E, ϕ L 1 where E is a norned real linear space. This is equivalent to the following statement: The unit ball functor : Vec 1 Lip 1 has D L : Lip 1 Vec 1 as a left adjoint. Proof. The proof is elementary because maps a contraction in Vec 1 to a contraction in Lip 1 and Theorem 4.3 states that δ L X (D L(X)) is a canonical universal embedding for a metric space into a normed real linear space. Proposition 4.5 Let (X,d) be a metric space. Then D (X) is a base normed linear space with base Bs(D (X)) = conv({δ X (x) x X}), the convex hull of δ X (X), the order cone C(D (X)) = R + Bs(D (X)) and the base norm k k λ B = α i δ X (x i ) = α i B i=1 i=1 if λ = k α i δ X (x i )is the representation of λ in the basis δ X (X). i=1 13

100 Proof. Lip (X) is an order unit normed linear space (see Proposition 2.2) with order unit 1I. The order in Lip (X) is pointwise. Hence, well-known results (cp, e.g. [13, chpt. 9) imply that Lip (X) is a base ordered linear space with cone C(Lip (X) ) := {λ λ Lip (X), λ(f) 0 for all f C(Lip (X))} and base Bs(Lip (X) ) = {λ f Lip (X), λ 0 and λ(1i) = 1} We define = C(Lip(X) ) {λ λ Lip (X) and λ(1i) = 1}. C(D (X)) := C(Lip (X) ) D (X) and B := Bs(Lip (X) ) D (X). Of course, B is a base set in D (X) with B C(D (X)), hence, R + B C(D (X)). n If λ := α i δ X (x i ) C(D (X)), then Remark 1.3 yields at once α i i=1 0, 1 i n. The converse implication is obviously true. If λ > 0, then n α := α i > 0 follows and i=1 holds. Ontheotherhand, λ = λ(1i) = λ = α n i=1 α i α δ X(x i ) (1) n α i δ X (x i ) B isequivalenttoλ(1i) = 1,i.e. i=1 n α i = 1.ThisshowsthatB istheconvex hullof{δ X (x) x X}, i=1 i.e. B = conv {δ X (x) x X} and because of (1), C(D (X)) = R + B, hence Bs(D (X)) := B is a base for C(D (X)). For a basis representation of λ = n i=1 α iδ X (x i ) D (X), λ 0 put α + := n α i, α := i=1 α i 0 n α i. i=1 α i <0 Then λ = α + λ + α λ (2) 14

101 with λ + = n i=1 α i >0 α i α + δ X(x i ) for α + 0 and λ + := 0 else, and analogously for λ. As a subset of Bs(Lip (X) ) the set Bs(D (X)) is linearly bounded. Hence the base seminorm induced by Bs(D (X)) is a norm (cp. [13). We denote this norm by 0 for the moment. One of the possible representations for a base norm is: λ 0 = inf{β +γ β,γ 0, λ = βξ γη, ξ,η Bs(D (X)) }. n For λ := α i δ X (x i ) (2) implies i=1 n λ 0 α + + α = α i. Now, let λ = βφ γψ be a second representation. We may take the union of δ X (x), x X, which appear inthebasis representation of λ,φ,andψ anddenoteitby{δ X (x i ) x i X, 1 i n}.letλ = α i δ X (x i ), φ = ϕ i δ X (x i ), and ψ n n = where ϕ i 0 and ψ i 0 for 1 i n. Then i=1 i=1 i=1 n n α i δ X (x i ) = (βϕ i γψ i )δ X (x i ) i=1 i=1 n ψ i δ i=1 hence α i = (βϕ i γψ i ) and α i = βϕ i γψ i βϕ i +γψ i. n n n As φ,ψ Bs(D (X)) one has ϕ i = ϕ i = 1 and α i β +γ follows such that λ B = i=1 i=1 n α i bas been proved. i=1 In view of Theorem 4.3 it is natural to ask if a similar result can be proved for δ X : X Bs(D (X)). This is indeed the case as the following Proposition shows. Let us denote the category of base-normed linear spaces and linear base preserving mappings (which are, by the way contractions) by BN-Vec 1. If, for E BN-Vec 1, Bs(E) denotes the base of E, then 15 i=1

102 Proposition 4.6 Let X be a metric space, E BN-Vec 1 and ϕ : X Bs(E) a Lipschitz mapping. Then there exists a unique ϕ 0 : D (X) E in BN-Vec 1 such that X δ X ϕ Bs(D (X)) Bs(E) Bs(ϕ 0 ) commutes, and 1 = ϕ 0 = ϕ, where ϕ = sup{ ϕ E x X}. Proof. AsδX (X)isabasisofD (X)thelinearmappingϕ 0 : D (X) E is well defined by ( n ) n ϕ 0 α i δ X (x i ) := α i ϕ(x i ) i=1 and makes the above diagram commutative. The proof of the equality of norms is trivial. As ϕ 0 is a contraction ϕ 0 1 holds. Also ϕ = 1 as ϕ(x) Bs(E). Moreover i.e. ϕ 0 = 1. i=1 1 = ϕ(x) E ϕ 0 δ X (x) = ϕ 0 1, Despite the fact that the Lipschitz constant or the norm L does not appear explicitly in Proposition 4.6 the result is nonetheless quite interesting for metric spaces. Corollary 4.7 Let (X,d X ) be a metric space. Then ( i) δ X : X Bs(D (X)) is a universal embedding into the base of the based normed linear space D (X), i.e. the canonical functor Bs : BN-Vec 1 Lip 1 has the functor D : Lip 1 BN-Vec 1 as a left adjoint with the adjunction morphism δ X : X Bs(D (X)). 16

103 (ii) δ X : X Bs(D (X)) is a universal contractive embedding into a metric convex module, i.e. if C is a convex module (see [6) and f : X C is in Lip 1, then there is a unique affine mapping f 0 : Bs(D (X)) C with f = f 0 δ X. If Met-Conv denotes the category of metric convex modules, or, what is the same, of metric convex subsets of real linear spaces [6 and affine mappings, Bs(D (X)) may be regarded as a functor BsD : Lip 1 Met-Conv which is left adjoint to the canonical forgetful functor U : Met-Conv Lip 1 assigning to every C Met-Conv its underlying metric space and δx induces the adjunction morphism. Proof. i) is just a reformulation of Proposition 4.6. ii) results by straightforward arguments from Proposition 4.6 and the results in 2 of [6. Corollary 4.7 ii) shows an interesting fact, namely the canonical and close connection between metric and convex structures. 5 The Predual There is another interesting topology, which was first investigated by R.F. Arens and J. Eells [1, and which will be discussed in this section. The following proof uses a method completely different from the one used in [1 and is considerably shorter. Define (X) : Lip L (X) D L(X) and ℸ : D L(X) Lip L ג by and (X) ( f)(λ )ג := λ(f), f Lip L (X), λ D L ℸ(λ)(x) := λ(δ L X(x)), λ D L(X), x X. ג Theorem 5.1 : Lip L (X) D L (X) and ℸ : D L (X) Lip L(X) are in Vec 1 and D ℸ ג = id L (X) and ג ℸ = id LipL (X), x X holds. Hence, ג and ℸ are isometries. 17

104 Proof. Let us denote the norm dual to L of D L(X) on D L (X) by # L. For x,y X one has if λ D L (X) : ℸ(λ)(x) ℸ(λ)(y) = λ ( δ L X (x)) λ ( δ L X (y)) Hence L(ℸ(λ)) λ # L and we have λ # L (δl X(x) δx(y) L L λ # Ld(x,y) (because of Proposition 4.1 ). follows. Moreover ℸ(λ)(x) = λ ( δ L X(x) ) λ # L δl X(x) L = λ # L ℸ(λ) = sup{ ℸ(λ)(x) x X} = sup{ λ ( δ L X (x)) x X} λ # L sup{ δl X (x) x X} λ # L. This yields ℸ(λ) L = max{ ℸ(λ),L(ℸ(λ))} λ # L i.e. ℸ 1, ( ) ℸ is a contraction. For f Lip L (X) one gets { } # ( f )ג L = sup ( f)(λ )ג λ D L (X) and λ L 1 { } { } = sup λ(f) λ D L (X) and λ L 1 λ L sup f L λ L 1 f L, which gives { ( f )ג # L = sup ג 1. ג contraction, is also a ג i.e. } f L 1 1. ( ), For λ D L (X) and x X X, (( ℸ(λ )ג ( δ L X(x) ) = δ L X(x)((ℸ(λ)) = ℸ(λ)(x) = λ ( δ L X(x) ), holds for all x which yields ( ℸ(λ ג = λ, because { δ L X (x) x X} is a basis of D L (X) and hence D ℸ ג = id L (X). Also for f Lip L (X) and any x X ℸ(ג(f))(x) = ( f )ג ( δx L (x)) = δx L (x)(f) = f(x) which results in ℸ(ג)(f) = f and hence ג ℸ = id LipL (X), i.e. ℸ and ג are inverse to each other. This together with ( ) and ( ) yields the assertion 18

Remark 5.2 To show the dependence on X, an index X will be added: ג_X and ℸ_X, because both are natural transformations between interesting functors. The interesting topology mentioned at the beginning of this section is the dual topology σ(D_L(X), D_L(X)′) transferred by ℸ (and ג) to Lip_L(X).

References

[1] R.F. Arens and J. Eells, Jr., (1956): On embedding uniform and topological spaces, Pacific Journ. of Math. 6.
[2] H. Bauer, (2002): Wahrscheinlichkeitstheorie (5te Auflage), de Gruyter Lehrbuch, Walter de Gruyter & Co., Berlin.
[3] N. Bourbaki, (1950): Éléments de mathématique. XI. Première partie: Les structures fondamentales de l'analyse, Hermann et Cie., Paris.
[4] N. Dunford and J.T. Schwartz, (1957): Linear Operators: Part I, Interscience Publishers, Inc., New York.
[5] D. Pumplün, (1999): Elemente der Kategorientheorie, Hochschultaschenbuch, Spektrum Akademischer Verlag, Heidelberg, Berlin.
[6] D. Pumplün, (2002): The metric completion of convex sets and modules, Result. Math. 41.
[7] D. Pumplün, (2011): A universal compactification of topological positively convex sets, Journ. Convex Anal. 18 (4).
[8] S. Rolewicz, (1972): Metric Linear Spaces, PWN Polish Scientific Publishers, Warszawa, and D. Reidel Publishing Company, Dordrecht.
[9] Z. Semadeni, (1971): Banach Spaces of Continuous Functions, Vol. I, PWN Polish Scientific Publishers, Warszawa.
[10] Z. Semadeni, (1979): Some Saks-space dualities in harmonic analysis on commutative semigroups, in: Special topics of applied mathematics (Proc. Sem., Ges. Math. Datenverarb., Bonn, 1979), 71-87, North-Holland, Amsterdam-New York.
[11] D. R. Sherbert, (1964): The structure of ideals and point derivations in Banach algebras of Lipschitz functions, Trans. AMS 111.

[12] Nik Weaver, (1999): Lipschitz Algebras, World Scientific, Singapore, New Jersey, London, Hong Kong.
[13] Yau Chuen Wong and Kung Fu Ng, (1973): Partially Ordered Topological Vector Spaces, Oxford Mathematical Monographs, Clarendon Press, Oxford.

107 Intersection probabilities for random convex bodies and lattices of parallelograms (II) Uwe Bäsel Andrei Duma Abstract We calculate the probabilities that a small convex body intersects 1, 2 or 3 parallelograms of a periodic lattice of congruent parallelograms generated by a family of equidistant lines and a family of line segments. Furthermore, the expectation of the number of intersected parallelograms is computed. As a corollary, we derive the probabilities that the convex body intersects a line and a line segment at the same time, a line or a line segment, and a line segment of the lattice. Formulas and references are given for the special cases that the convex body is a rectangle, a line segment (needle), a regular polygon, and an orbiform. For the needle our results coincide with the results of Santaló. AMS Classification: 60D05, 52A22 Keywords: random convex sets, intersection/hitting probabilities, lattice of parallelograms, Buffon problem, regular polygons, sets of constant width, orbiforms 1 Introduction We consider the random throw of a plane convex body C onto a plane with a periodic lattice S a, b, α of congruent parallelograms P (see Fig. 1). S a, b, α can be regarded as the union of the lattice S a of the slanting line segments and the Buffon lattice R b of the horizontal parallel lines. Laplace [5, pp found the intersection probability for a line segment (needle) of length l min(a, b) and the lattice R a, b, α with α = π/2. R a, b, α, which is the union of two Buffon lattices R a and R b of parallel lines, is shown in Fig. 2. Santaló [7, pp. 166/167 (see also [8, p. 139) calculated the probabilities that a needle intersects 0, 1 and 2 lines of the lattice R a, b, α and lines/line segments of the lattice S a, b, α, respectively, where the length of the needle is assumed to be so small that it cannot intersect more than one of the horizontal lines and one of the slanting lines/line segments at the same time. 
The respective probabilities for R a, b, α and S a, b, α are equal. For the lattice R a, b, α in Fig. 2 a lot of further results are already known: Duma and Stoka [4, pp. 971/972 found the probability that an ellipse intersects one line of R a, b, π/2. Ren and Zhang [6, p. 320 derived for R a, b, α a

108 Fig. 1: Lattice S a, b, α = S a R b Fig. 2: Lattice R a, b, α = R a R b general formula for the probability that a convex body C has exactly i intersections with R a and at the same time j intersections with R b. Bäsel [2 calculated the probability that a small regular n-gon (n 2) intersects R a, b, α (or S a, b, α ). Results concerning the independence of events are to be found in Ren and Zhang [6, p. 325, Aleman et al. [1 and Bäsel [2. Bäsel and Duma [3 recently found the probabilities that a small convex body C intersects 1, 2, 3 and 4 parallelograms of R a, b, α. In the following, we will use the notations: F P = area of the parallelogram P, L P = perimeter of P, F C = area of the convex body C, L C = perimeter of C, w(φ) = width of C in the direction φ (see Fig. 3). We assume C to be small so that it cannot intersect more than three parallelograms at the same time. Our aim is to calculate the probabilities p(i), i = 1, 2, 3, that C intersects exactly i parallelograms of S a, b, α. 2

2 Intersection probabilities

Theorem 1. The probabilities p(i) that C intersects exactly i parallelograms of S_{a,b,α} are given by

    p(1) = [2π(F_P + F̄_α(w)) − L_P L_C] / (2πF_P),
    p(2) = [L_P L_C − 2π(F_C + 2F̄_α(w))] / (2πF_P),
    p(3) = (F_C + F̄_α(w)) / F_P

with

    F_P = ab / sin α,  L_P = 2(a + b) / sin α,  F̄_α(w) = (1/(π sin α)) ∫₀^π w(φ) w(φ + α) dφ,

and the expectation of the random number Z of intersected parallelograms by

    E(Z) = [2π(F_P + F_C) + L_C L_P] / (2πF_P).

Fig. 3: C intersects three parallelograms (situation for fixed value of the angle φ)
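As an illustration (not part of the paper), the formulas of Theorem 1 can be checked for internal consistency: the three probabilities must sum to 1 and must reproduce E(Z) = 1·p(1) + 2·p(2) + 3·p(3). The helper below is a sketch; the test data are taken from Section 3.3, where a circle of diameter d has w(φ) ≡ d and hence F̄_α(w) = d²/sin α.

```python
import math

def lattice_probs(F_C, L_C, Fbar, a, b, alpha):
    """p(1), p(2), p(3) and E(Z) from Theorem 1 for the lattice S_{a,b,alpha}."""
    F_P = a * b / math.sin(alpha)            # area of one parallelogram
    L_P = 2 * (a + b) / math.sin(alpha)      # its perimeter
    p1 = (2 * math.pi * (F_P + Fbar) - L_P * L_C) / (2 * math.pi * F_P)
    p2 = (L_P * L_C - 2 * math.pi * (F_C + 2 * Fbar)) / (2 * math.pi * F_P)
    p3 = (F_C + Fbar) / F_P
    EZ = (2 * math.pi * (F_P + F_C) + L_C * L_P) / (2 * math.pi * F_P)
    return p1, p2, p3, EZ

# circle of diameter d: F_C = pi d^2 / 4, L_C = pi d, Fbar = d^2 / sin(alpha)
a, b, alpha, d = 3.0, 2.0, math.pi / 3, 0.5
F_C, L_C = math.pi * d**2 / 4, math.pi * d
p1, p2, p3, EZ = lattice_probs(F_C, L_C, d**2 / math.sin(alpha), a, b, alpha)

assert abs(p1 + p2 + p3 - 1) < 1e-12
assert abs(p1 + 2 * p2 + 3 * p3 - EZ) < 1e-12
assert all(0 < p < 1 for p in (p1, p2, p3))
```

Both identities hold algebraically for any F_C, L_C and F̄_α(w), which is a useful cross-check on the reconstructed formulas.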

Proof. We choose a fixed reference point O and a fixed line segment σ (starting from O) inside the convex body C, see Fig. 3. φ is the angle between the reference direction perpendicular to the slanting lines of S_{a,b,α} and σ. (The thick lines are the boundaries of the parallelograms of S_{a,b,α}.) For fixed value of φ, C intersects exactly three parallelograms of S_{a,b,α} if O is inside one of the grey coloured sets. The dashed lines are a congruent copy of S_{a,b,α}. In every parallelogram of this copy there are exactly two grey coloured sets, e.g. the sets with numbers 1 and 2 respectively. Let F_i(φ), i = 1, 2, be the area of set i. Hence the conditional probability for fixed φ that C intersects exactly three parallelograms is given by

    p(3 | φ) = (F_1(φ) + F_2(φ)) / F_P.

One finds that

    F_1(φ) + F_2(φ) = F_C + F_α(w(φ)),

where

    F_α(w(φ)) := w(φ) w(φ + α) / sin α

is the area of a parallelogram with respective distances w(φ) and w(φ + α) between opposite sides and angle α. With the density function f of the uniformly distributed random variable φ,

    f(φ) = 1/π if 0 ≤ φ ≤ π, 0 else,

we get the (total) probability

    p(3) = ∫₀^π p(3 | φ) f(φ) dφ = (1/(πF_P)) ∫₀^π [F_1(φ) + F_2(φ)] dφ
         = (1/(πF_P)) ∫₀^π [F_C + F_α(w(φ))] dφ = (1/F_P) ( F_C + (1/π) ∫₀^π F_α(w(φ)) dφ ).

For abbreviation we put

    F̄_α(w) := (1/π) ∫₀^π F_α(w(φ)) dφ

and get

    p(3) = (F_C + F̄_α(w)) / F_P.

Clearly, F̄_α(w) is the mean value (expectation) of the function F_α(w(φ)) with φ uniformly distributed in the interval [0, π]. The probability p(1) is known from [1, p. 302/303] and [3]:

    p(1) = 1 − L_P L_C/(2πF_P) + F̄_α(w)/F_P = [2π(F_P + F̄_α(w)) − L_P L_C] / (2πF_P).

For p(2) we have p(2) = 1 − p(1) − p(3), and therefore

    p(2) = L_P L_C/(2πF_P) − F_C/F_P − 2F̄_α(w)/F_P = [L_P L_C − 2π(F_C + 2F̄_α(w))] / (2πF_P).

For the expectation we find

    E(Z) = 1·p(1) + 2·p(2) + 3·p(3) = 1 + F_C/F_P + L_P L_C/(2πF_P) = [2π(F_P + F_C) + L_P L_C] / (2πF_P).

Remark 1. F̄_α(w) sin α/(ab) is identical with P(A ∩ B) (see [1, p. 306]). A ∩ B is the event that C, thrown at random onto R_{a,b,α}, intersects R_a (event A) and R_b (event B) at the same time.

Remark 2. The expectation E(Z) is the same as for R_{a,b,α} [3]. This is a special case of a more general result, see [8]: formula (8.7) on p. 132 and the theorem on p. … Note that E(Z) does not depend on F̄_α(w). It is sufficient to know the area F_C and the perimeter L_C of C. So it is easier to compute E(Z) than to compute p(1), p(2) and p(3).

Remark 3. The probabilities and the expectation may be written in the form:

    p(1) = 1 − (a + b)L_C/(πab) + F̄_α(w) sin α/(ab),
    p(2) = (a + b)L_C/(πab) − (F_C + 2F̄_α(w)) sin α/(ab),
    p(3) = (F_C + F̄_α(w)) sin α/(ab),
    E(Z) = 1 + (a + b)L_C/(πab) + F_C sin α/(ab).

Let A_S denote the event that C intersects S_a (that is, one line segment of S_a), and as above B the event that C intersects R_b (one line of R_b). According to Barbier, the probability P(B) that C intersects R_b is equal to L_C/(πb). Since C intersects three parallelograms of S_{a,b,α} if and only if it intersects one line segment of S_a and one line of R_b, we easily find the result of the following corollary.

Corollary 1. The probabilities P(A_S ∩ B), P(A_S ∪ B) and P(A_S) are given by

    P(A_S ∩ B) = p(3) = (F_C + F̄_α(w))/F_P = (F_C + F̄_α(w)) sin α/(ab),
    P(A_S ∪ B) = 1 − p(1) = p(2) + p(3) = [L_P L_C − 2πF̄_α(w)]/(2πF_P) = (a + b)L_C/(πab) − F̄_α(w) sin α/(ab),

    P(A_S) = P(A_S ∪ B) + P(A_S ∩ B) − P(B) = L_C/(πa) + F_C/F_P = L_C/(πa) + F_C sin α/(ab).

Remark 4. The probability P(A_S) also follows from the general formula

    p = [2π(F_0 + F_1) + L_0 L_1] / (2πα_0)

in [8, p. 140] for a random convex set and a lattice of convex sets. In our case, each parallelogram P of S_{a,b,α} contains exactly one line segment of S_a with area F_0 = 0 and perimeter L_0 = 2b/sin α; α_0 = F_P is the area of one parallelogram. Furthermore we have F_1 = F_C and L_1 = L_C. Hence

    P(A_S) = p = [2π(0 + F_C) + (2b/sin α) L_C] / (2πF_P) = F_C/F_P + b L_C/(πF_P sin α) = F_C sin α/(ab) + L_C/(πa).

3 Some special cases

3.1 C is a rectangle or a needle

Let s and t denote the side lengths of the rectangle. One finds L_C = 2(s + t), F_C = st and

    F̄_α(w) = [((π − 2α)(s² + t²) + 4st) cos α + 2(s² + t² + 2αst) sin α] / (2π sin α),

see [3]. For t = 0 we find, with L_C = 2s and F_C = 0, for a needle of length s

    F̄_α(w) = s²[(π − 2α) cos α + 2 sin α] / (2π sin α)

and hence

    p(1) = 1 − 2(a + b)s/(πab) + s²[(π − 2α) cos α + 2 sin α]/(2πab),
    p(2) = 2(a + b)s/(πab) − s²[(π − 2α) cos α + 2 sin α]/(πab),
    p(3) = s²[(π − 2α) cos α + 2 sin α]/(2πab).

This also holds true for the lattice R_{a,b,α}; it is the result of Santaló [7, p. 166/167], [8, p. 139].
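The closed form for F̄_α(w) of a rectangle can be sanity-checked against the defining integral (1/(π sin α)) ∫₀^π w(φ) w(φ + α) dφ. The sketch below assumes the standard support-width function w(φ) = s|cos φ| + t|sin φ| of an s × t rectangle (this parametrization is an assumption of the sketch, not spelled out in the paper) and compares a simple midpoint-rule quadrature with the formula of Section 3.1.

```python
import math

def rect_width(phi, s, t):
    # width of an s-by-t rectangle in direction phi
    return s * abs(math.cos(phi)) + t * abs(math.sin(phi))

def Fbar_numeric(s, t, alpha, n=20_000):
    # midpoint rule for (1/(pi sin a)) * int_0^pi w(phi) w(phi + alpha) dphi
    h = math.pi / n
    acc = sum(
        rect_width((k + 0.5) * h, s, t) * rect_width((k + 0.5) * h + alpha, s, t)
        for k in range(n)
    )
    return acc * h / (math.pi * math.sin(alpha))

def Fbar_closed(s, t, alpha):
    # closed form from Section 3.1
    return (((math.pi - 2 * alpha) * (s**2 + t**2) + 4 * s * t) * math.cos(alpha)
            + 2 * (s**2 + t**2 + 2 * alpha * s * t) * math.sin(alpha)
            ) / (2 * math.pi * math.sin(alpha))

s, t, alpha = 0.7, 0.4, math.pi / 5
assert abs(Fbar_numeric(s, t, alpha) - Fbar_closed(s, t, alpha)) < 1e-4
```

Setting t = 0 in `Fbar_closed` reproduces the needle value s²[(π − 2α) cos α + 2 sin α]/(2π sin α).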

3.2 C is a regular polygon

Let $l$ denote the radius of the circumscribed circle of the regular $n$-gon. We have

$$F_C = \frac{1}{2}\,n l^2 \sin\frac{2\pi}{n}\,, \qquad L_C = 2nl\sin\frac{\pi}{n}\,.$$

Formulas for $F_\alpha(w)$ for even $n$ and odd $n$ may easily be obtained from [2, pp. 248/249, 256] using

$$F_\alpha(w) = \frac{ab}{\sin\alpha}\,P(A\cap B)\,.$$

3.3 C is an orbiform

An orbiform is a plane convex body of constant width $d$. All orbiforms of width $d$ have perimeter

$$L_C = \int_0^\pi w(\varphi)\,d\varphi = \int_0^\pi d\,d\varphi = \pi d\,.$$

For all orbiforms of width $d$ we have $F_\alpha(w) = d^2/\sin\alpha$, hence

$$p(1) = 1 - \frac{(a+b)d}{ab} + \frac{d^2}{ab}\,, \qquad p(2) = \frac{(a+b)d}{ab} - \frac{F_C\sin\alpha}{ab} - \frac{2d^2}{ab}\,, \qquad p(3) = \frac{F_C\sin\alpha}{ab} + \frac{d^2}{ab}\,.$$

The circle with diameter $d$ is an orbiform with $F_C = \pi d^2/4$, hence

$$p(1) = 1 - \frac{(a+b)d}{ab} + \frac{d^2}{ab}\,, \qquad p(2) = \frac{(a+b)d}{ab} - \frac{\pi d^2\sin\alpha}{4ab} - \frac{2d^2}{ab}\,, \qquad p(3) = \frac{\pi d^2\sin\alpha}{4ab} + \frac{d^2}{ab}\,.$$

3.4 C is a square

A square of side length $s$ is a special case of a rectangle or a regular polygon. If this square is thrown onto a lattice $S_{a,a,\pi/2}$ of squares with $a \ge 2\sqrt{2}\,s$, then the probabilities are given by

$$p(1) = 1 - \frac{8}{\pi}\,\frac{s}{a} + \left(1 + \frac{2}{\pi}\right)\left(\frac{s}{a}\right)^{2}, \qquad p(2) = \frac{8}{\pi}\,\frac{s}{a} - \left(3 + \frac{4}{\pi}\right)\left(\frac{s}{a}\right)^{2}, \qquad p(3) = \left(2 + \frac{2}{\pi}\right)\left(\frac{s}{a}\right)^{2}.$$
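Two of these special cases are easy to evaluate directly. The following sketch (our own illustration; the parameter values are arbitrary and merely respect the implicit requirement that the test body fits into a lattice cell) codes the needle of Section 3.1 and the circle of Section 3.3:

```python
import math

def needle_probabilities(a, b, alpha, s):
    """p(1), p(2), p(3) for a needle of length s on S_{a,b,alpha}
    (Section 3.1): F_C = 0, L_C = 2s."""
    F = s**2 * ((math.pi - 2 * alpha) * math.cos(alpha) + 2 * math.sin(alpha)) \
        / (2 * math.pi * math.sin(alpha))          # F_alpha(w) of the needle
    t1 = 2 * (a + b) * s / (math.pi * a * b)       # the Barbier-type term
    p3 = F * math.sin(alpha) / (a * b)
    return 1 - t1 + p3, t1 - 2 * p3, p3

def circle_probabilities(a, b, alpha, d):
    """p(1), p(2), p(3) for a circle of diameter d (Section 3.3): an orbiform
    with F_C = pi*d^2/4 and F_alpha(w) = d^2/sin(alpha)."""
    ab = a * b
    p1 = 1 - (a + b) * d / ab + d**2 / ab
    p2 = (a + b) * d / ab - math.pi * d**2 * math.sin(alpha) / (4 * ab) - 2 * d**2 / ab
    p3 = math.pi * d**2 * math.sin(alpha) / (4 * ab) + d**2 / ab
    return p1, p2, p3
```

In both cases the three values lie in $[0,1]$ and sum to 1, as they must.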

References

[1] A. Aleman, M. Stoka, T. Zamfirescu: Convex bodies instead of needles in Buffon's experiment, Geometriae Dedicata 67 (1997).
[2] U. Bäsel: Buffon's problem with regular polygons, Beitr. Algebra Geom. 53, No. 1 (2012).
[3] U. Bäsel, A. Duma: Intersection probabilities for random convex bodies and lattices of parallelograms, FernUniversität in Hagen: Seminarberichte aus der Fakultät für Mathematik und Informatik.
[4] A. Duma, M. Stoka: Hitting probabilities for random ellipses and ellipsoids, J. Appl. Prob. 30 (1993).
[5] P.-S. Laplace: Théorie analytique des probabilités, Mme Ve Courcier, Paris.
[6] D. Ren, G. Zhang: Random convex sets in a lattice of parallelograms, Acta Math. Sci. 11 (1991).
[7] L. A. Santaló: Sur quelques problèmes de probabilités géométriques, Tôhoku Math. J. 47 (1940).
[8] L. A. Santaló: Integral Geometry and Geometric Probability, Addison-Wesley, London.

Uwe BÄSEL, HTWK Leipzig, Fakultät für Maschinenbau und Energietechnik, Leipzig, Germany, uwe.baesel@htwk-leipzig.de

Andrei DUMA, FernUniversität in Hagen, Fakultät für Mathematik und Informatik, Hagen, Germany, Mathe.Duma@FernUni-Hagen.de

Eingegangen am:

SPONTANEOUS AMPLIFICATION OF THERMAL NOISE BY A SLACK JOINT

Eugen Grycko¹, Werner Kirsch², Tobias Mühlenbruch³

¹,²,³ Department of Mathematics and Computer Science, University of Hagen, Universitätsstrasse 1, D-Hagen, GERMANY
¹ eugen.grycko@fernuni-hagen.de
² werner.kirsch@fernuni-hagen.de
³ tobias.muehlenbruch@fernuni-hagen.de

Abstract: Recently the phenomenon of an enforced thermal noise amplification in a conductor exposed to an electrostatic field has been discovered in a laboratory experiment and qualitatively explained by computations concerning a simplified quantum mechanical model. The dispersion of the velocity operator of an electron at thermal equilibrium has turned out to be an appropriate indicator of the thermal noise level inherent in a metal. In the present contribution the behavior of this indicator is studied for an electron confined within a lattice which is interpreted as a quantum mechanical position space in the sense of a tight binding model. It turns out that the dispersion of the velocity operator increases when a narrow gap is arranged between two pieces of a metal. An empirical pendant of this computational result is also reported.

AMS Subject Classification: 82B30
Key Words: Hamiltonian, quantum mechanical Gibbs state, dispersion of the velocity operator

1. Introduction

In [3] and [4] an experimental and a theoretical confirmation of the phenomenon of thermal noise amplification in a conductor are reported; the conductor is exposed to an electrostatic field which is realized by

imposing a high dc voltage on a parallel-plate capacitor. The thermal noise level is measured as the dispersion of the voltage signal between appropriate points of the conductor. This phenomenon is noteworthy insofar as it can be realized by applying only negligible electric power.

In the present contribution we reintroduce a simplified quantum mechanical model of a metal. We consider an electron which is sited within a discrete position space and compute the corresponding Hamiltonian (Section 2). This kind of quantum mechanical model (tight binding models, cf. [1, p. 168]) is the subject of intensive exploration (cf. [1], [2] and the literature cited therein) and offers an attractive field for computer experimental studies. In Section 3 we reintroduce the quantum mechanical Gibbs state describing an electron at thermal equilibrium; an example illustrates the physical plausibility of this simplified description. In Section 4 the quantum mechanical dispersion of the velocity operator is defined; in [4] this dispersion is proposed as an indicator of the thermal noise level inherent in a metal. We report on the outcome of a computer experiment predicting thermal noise amplification by a slack joint between two pieces of a metal based on the introduced quantum mechanical model. In Section 5 an empirical confirmation of the predicted phenomenon is reported.

2. A Hamiltonian for an Electron in a Discrete Position Space

Let us consider a finite lattice $L_a := \{na \mid n = 1, \dots, N\}$ of $N$ points modeling a discrete position space; the parameter $a > 0$ is called the lattice constant. In a simplified tight binding model a quantum mechanical state of an electron is described by a function $\varphi : L_a \to \mathbb{C}$ satisfying the condition

$$\sum_{n=1}^{N} |\varphi(na)|^2 = 1\,.$$

In this context $|\varphi(na)|^2$ is interpreted as the probability of spatial association of the electron with lattice point $na \in L_a$. By a standard identification, the set of all electronic states can be viewed as the unit sphere in $\mathbb{C}^N$.
The quantum mechanical momentum operator $\hat p : \mathbb{C}^N \to \mathbb{C}^N$ is defined

by

$$(\hat p\,\varphi)(na) = -i\hbar\,\frac{\varphi((n+1)a) - \varphi((n-1)a)}{2a} \qquad (n = 1, \dots, N) \tag{2.1}$$

where $\hbar$ denotes the (reduced) Planck constant; in (2.1) the convention $\varphi(na) = 0$ for $n < 1$ and for $n > N$ is applied and can be interpreted as Dirichlet boundary condition (cf. [2, p. 28ff]). $\hat p$ is self-adjoint and serves as a discrete central difference approximation of the 1-dimensional momentum operator $-i\hbar\,\frac{d}{dx}$ for the position space modeled by the real line. Accordingly, the kinetic energy of an electron is expressed by the operator $\hat p^2/2m$ where $m$ denotes the electronic mass.

2.1 Remark: All entries in the matrix representing operator $\hat p$ are purely imaginary; consequently, all entries in the matrix representing $\hat p^2/2m$ are real numbers.

Let $1 \le N_{L1} < N_{R1} \le N_{L2} < N_{R2} \le N$ be four positive integers. Now we introduce the potential $U : L_a \to \mathbb{R}$ by

$$U(na) = \begin{cases} eD & \text{for } N_{L1} \le n \le N_{R1}\\ eD & \text{for } N_{L2} \le n \le N_{R2}\\ 0 & \text{elsewhere} \end{cases} \tag{2.2}$$

where $e$ denotes the charge of an electron and $eD$ is interpreted as the depth of the potential wells defined in (2.2). Put $d := N_{L2} - N_{R1}$. If $d = 0$, then $U$ has only one potential well whose position is interpreted as the localization of a 1-dimensional piece of a metal placed within $L_a$. If $d > 0$, then $U$ corresponds to two pieces of a metal separated by a gap of width $d\cdot a$.

To describe an electron confined within lattice $L_a$ by a simplified tight binding model we introduce the Hamiltonian

$$\hat H := \frac{\hat p^2}{2m} + \hat U$$

whose potential term $\hat U$ is represented by the (diagonal) matrix

$$\hat U(n, m) = \begin{cases} U(na) & \text{if } n = m\\ 0 & \text{if } n \ne m \end{cases}$$

where $n, m = 1, \dots, N$.

2.2 Remark: All entries in the matrix describing operator $\hat H$ are real numbers.

3. The Gibbs State of an Electron

The set of all states of an electron can be embedded into the set of positive semi-definite operators with trace 1 by associating any unit vector $\varphi \in \mathbb{C}^N$ with the orthogonal projection onto the 1-dimensional space spanned by $\varphi$. Therefore we call any positive semi-definite operator $Z : \mathbb{C}^N \to \mathbb{C}^N$ with trace 1 a (generalized) quantum mechanical state of an electron sited within lattice $L_a$.

Let us consider Hamiltonian $\hat H$ introduced in Section 2. Let $T > 0$ denote the temperature of lattice $L_a$. The operator $G_T : \mathbb{C}^N \to \mathbb{C}^N$ modeling the Gibbs state of an electron is given by

$$G_T = \frac{1}{Z(T)}\,\exp\!\left(-\frac{1}{k_B T}\,\hat H\right) \tag{3.1}$$

where

$$Z(T) := \operatorname{trace}\!\left(\exp\!\left(-\frac{1}{k_B T}\,\hat H\right)\right) \tag{3.2}$$

denotes the partition function and $k_B$ the Boltzmann constant. $G_T$ is a positive operator whose trace is equal to 1. Operator $G_T$ is motivated by the entropy principle (cf. [5, p. 384]) and describes the thermal equilibrium state of an electron confined within lattice $L_a$, with the interpretation of the diagonal entry $G_T(n, n)$ as the probability of spatial association of the electron with lattice point $na$.

3.1 Remark: All entries in the matrix describing operator $G_T$ are real numbers; cf. Remark 2.2.

3.2 Example: Put $N = 2000$, $N_{L1} = 400$, $N_{R1} = 989$, $N_{L2} = 1011$, $N_{R2} = 1600$, $D = 0.1\,$V, $T = 300\,$K and $a = 10^{-10}\,$m. Note that $10^{-10}\,$m $= 1\,$Å corresponds

to the typical order of magnitude for the distance between adjacent ions in a metal.

Fig. 1: Gibbsian probabilities

In Figure 1 the horizontal axis corresponds to lattice $L_a$. The graph shows the probabilities $G_T(n, n)$ of finding an electron at lattice points $na$, $n = 1, \dots, N$. The diagram suggests that the electron prefers sites with low potential energy w.r.t. potential $U$; an attentive look at Fig. 1 reveals, moreover, that the electron tunnels through the gap between the two pieces of metal. Both observations underline the physical plausibility of the simplified quantum mechanical description of an electron confined within lattice $L_a$.

4. The Dispersion of the Electronic Velocity Operator

The operator

$$\hat v : \mathbb{C}^N \to \mathbb{C}^N, \qquad \hat v := \frac{\hat p}{m}\,,$$

describes the velocity of an electron sited within lattice $L_a$ where $\hat p$ is the momentum operator introduced in Section 2.
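Remark 2.1 and the structure of $\hat v = \hat p/m$ can be made concrete by writing down the matrix of $\hat p$ from (2.1). The sketch below is our own illustration, not the authors' program; the value of $\hbar$ is the CODATA constant.

```python
import numpy as np

HBAR = 1.054571817e-34  # J*s, reduced Planck constant (CODATA value)

def momentum_matrix(N, a):
    """N x N matrix of the operator (2.1): row n couples phi((n+1)a) and
    phi((n-1)a); the Dirichlet convention drops couplings outside the lattice."""
    P = np.zeros((N, N), dtype=complex)
    for n in range(N - 1):
        P[n, n + 1] = -1j * HBAR / (2 * a)  # coefficient of phi((n+1)a)
        P[n + 1, n] = 1j * HBAR / (2 * a)   # coefficient of phi((n-1)a)
    return P
```

One checks numerically that this matrix is self-adjoint with purely imaginary entries, so that the kinetic-energy matrix $\hat p^2/2m$ has real entries, exactly as stated in Remark 2.1.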

The quantum mechanical expectation $E_{qm}(\hat v)$ of the velocity of an electron whose state is described by $G_T$ is given by

$$E_{qm}(\hat v) = \operatorname{trace}(G_T\,\hat v)\,. \tag{4.1}$$

From Linear Algebra it is known that the trace in (4.1) is a real number. Since the matrix describing operator $G_T$ is real and the Hermitian matrix corresponding to operator $\hat v$ is purely imaginary, the trace in (4.1) is purely imaginary; we conclude that

$$E_{qm}(\hat v) = 0 \tag{4.2}$$

holds for arbitrary $T > 0$. This means that there is no direct electronic current in the considered arrangement, which is physically plausible. (4.2) implies, moreover, that the quantum mechanical variance $V_{qm}(\hat v)$ of the velocity operator can be computed according to

$$V_{qm}(\hat v) = \operatorname{trace}(G_T\,\hat v^2)\,. \tag{4.3}$$

The quantum mechanical dispersion of velocity operator $\hat v$ is defined by $D_{qm}(\hat v) := \sqrt{V_{qm}(\hat v)}$; this dispersion can be viewed as an indicator of the thermal noise level inherent in the quantum mechanical arrangement.

4.1 Example: Put $N = 1000$, $a = 10^{-10}\,$m, $T = 300\,$K, $D = 0.5\,$V. In a computer experiment we fix the parameters $N_{R1} - N_{L1} = N_{R2} - N_{L2} = 300$ of potential $U$ imposed on lattice $L_a$ (cf. Section 2); this means that we fix the lengths $L_1 = L_2 = 3\cdot 10^{-8}\,$m of two pieces of a metal. The width $d\cdot a$ of the gap between the pieces is varied according to $d = N_{L2} - N_{R1} = 0, 1, \dots, 160$. For each choice of $d$ the corresponding dispersion value of the velocity operator is computed; note that in the case $d = 0$ the two pieces of metal are joined together and for $d = 160$ the maximal width $1.6\cdot 10^{-8}\,$m of the gap is attained.

Fig. 2: Dispersion of the velocity operator

In Figure 2 the horizontal axis corresponds to the width of the gap between two pieces of metal (the physical unit is m) and the vertical axis to the quantum mechanical dispersion of the velocity operator (the physical unit is m/s). The diagram shows that the dispersion of electronic velocity increases for increasing width of the gap. Note that the Maxwellian dispersion of thermal velocity of electrons at $T = 300\,$K is … m/s.

5. An Empirical Pendant of the Computational Result

Example 4.1 suggests that the thermal noise level of electrons positioned within two pieces of a metal should increase if a small gap is arranged between them. Figure 3 shows an aluminium cylinder separated from an aluminium cube by a narrow gap. The thermal voltage signal between the cylinder and the cube is visualized on the screen of an oscilloscope. According to our experience, the measured amplitude of the thermal voltage signal exceeds 10 mV and becomes significantly smaller when cylinder and cube are connected. This observation can be interpreted as an empirical pendant of the computer experimental

result based on a simplified quantum mechanical model and reported in Section 4.

Fig. 3: Thermal noise amplification by a slack joint

5.1 Remark: Classical Thermodynamics seems to be unable to explain the phenomenon illustrated in Figure 3. We believe, however, that a quantitative explanation of the amplification of thermal noise by a slack joint between two pieces of a metal would be a challenging objective for Quantum Thermodynamics based on Random Hamiltonians.

Acknowledgment

The authors would like to thank Leonid Pastur from Kharkiv, Ukraine, for stimulating discussions on the subject of the contribution.

References

[1] H.L. Cycon, R.G. Froese, W. Kirsch, B. Simon, Schrödinger Operators. Springer, Berlin, Heidelberg, New York (2008).
[2] M. Disertori, W. Kirsch, A. Klein, F. Klopp, V. Rivasseau, Random Schrödinger Operators. Panoramas et Synthèses, Société Mathématique de France, Paris (2008).
[3] E. Grycko, W. Kirsch, T. Mühlenbruch, Amplification of thermal noise by an electrostatic field. Int. J. Pure Appl. Math. 61, No. 2 (2010).
[4] E. Grycko, W. Kirsch, T. Mühlenbruch, Some quantum mechanical evidence for the amplification of thermal noise in an electrostatic field. Int. J. Pure Appl. Math. 69, No. 4 (2011).
[5] W. Thirring, Quantum Mathematical Physics. Second edition, Springer, Berlin, Heidelberg, New York (2002).

Eingegangen am:


ON THE THERMAL ANGULAR MOMENTUM OF THE ELECTRON GAS

Eugen Grycko¹, Werner Kirsch², Tobias Mühlenbruch³

¹,²,³ Department of Mathematics and Computer Science, University of Hagen, Universitätsstrasse 1, D-Hagen, GERMANY
eugen.grycko@fernuni-hagen.de
werner.kirsch@fernuni-hagen.de
tobias.muehlenbruch@fernuni-hagen.de

Abstract: We consider the electron gas within a virtual conductor which is described in terms of a modified Drude model. In the first case the conductor is ring-shaped, which entails that the total thermal angular momentum of the gas is constant. In the second case the conductor is angled; a stochastic process for the description of the total thermal angular momentum is proposed and explored in a computer experiment. It turns out that the autocovariance of the proposed process tends to decrease for increasing time lag. Experimental pendants of the differing behaviors of the autocovariance function are reported; these pendants are obtained by visualizing and comparing appropriate noise signals on the screen of an oscilloscope.

Key Words: mass point, orbital angular momentum, thermal equilibrium distribution

1. Introduction

According to the Drude model, the electrons in a metallic conductor constitute a gas of charged mass points that are subject to thermal motion. In a modified Drude model which has been introduced in [1], a momentary state of the electron gas is described by the Gaussian distribution (of velocities) whose variance is interrelated with the temperature of the conductor. In [5, sec. 2.3], this kind of ideal gas in a container has been studied in a computer experiment whose statistical evaluation has led to the establishment

of an equation of state of the gas interrelating pressure with temperature. In the simulation experiment Newtonian dynamics has been imposed on the micro-constituents of the gas and the momentum transferred to the walls of the container has been sampled with the objective of estimating pressure. In [1] and [4] the modified Drude model has been applied for the derivation of a description of the thermal voltage in a conductor. The model considered there is related to a conductor with the shape of a hyper-rectangle.

In the present contribution we consider a 2-dimensional electron gas whose micro-constituents obey the Newtonian dynamics. We study the influence of the shape of a conductor on the thermal voltage signal. In Section 2 a ring-shaped conductor is considered; it is pointed out that the total thermal angular momentum of the electron gas is constant, which suggests a persistence of the direction of the rotational thermal motion. In Section 3 an angled conductor is introduced; a stochastic process is proposed for the description of the total thermal angular momentum of the electrons as a function of time. The autocovariance of the process is studied based on a computer experiment. It turns out that the autocovariance tends to decrease as a function of the time lag, which suggests that the direction of the thermal motion of the electrons is not persistent in the case of an angled conductor. In Section 4 a laboratory experiment is presented whose outcome complies with the results presented in Sections 2 and 3.

2. The Case of a Ring-Shaped Container

Let us consider a ring-shaped container

$$C_1 := \{(x_1, x_2) \in \mathbb{R}^2 \mid r_1^2 \le x_1^2 + x_2^2 \le R_1^2\}$$

where $R_1 > r_1 > 0$. $C_1$ is filled with a gas consisting of $N$ mass points of mass $m > 0$.
The initial positions $\vec x^{(1)}(0), \dots, \vec x^{(N)}(0) \in C_1$ of the points are generated according to the uniform distribution over $C_1$ and the initial velocities $\vec v^{(1)}(0), \dots, \vec v^{(N)}(0) \in \mathbb{R}^2$ according to the centered Gaussian distribution $N(0, \sigma^2 I_2)$ where $I_2$ denotes the $2\times 2$ identity matrix and parameter $\sigma > 0$ is interpreted thermally according to

$$\sigma^2 = \frac{k_B T}{m} \tag{2.1}$$

where $k_B$ denotes the Boltzmann constant and $T > 0$ the temperature of the gas, cf. [6].

Let us suppose that the mass points do not interact and evolve according to the Newtonian dynamics; this means that the positions $\vec x^{(j)}(t)$ as functions of time $t$ are described by

$$\vec x^{(j)}(t) = \vec x^{(j)}(t_0) + (t - t_0)\,\vec v^{(j)}(t_0) \tag{2.2}$$

for $j = 1, \dots, N$ and $t > t_0 \ge 0$ if no reflection of a mass point occurs at the boundary $\partial C_1$ of $C_1$ in the time interval $[t_0, t)$. If mass point $j$ arrives at the boundary $\partial C_1$ at time point $t_1$, then the velocity component orthogonal to $\partial C_1$ is reflected:

$$\vec v^{(j)}(t_1+) = \vec v^{(j)}(t_1-) - 2\,\langle \vec v^{(j)}(t_1-), \vec u\rangle\,\vec u \tag{2.3}$$

where $\langle\cdot,\cdot\rangle$ denotes the standard scalar product on $\mathbb{R}^2$ and

$$\vec u = \frac{\vec x^{(j)}(t_1)}{\langle \vec x^{(j)}(t_1), \vec x^{(j)}(t_1)\rangle^{1/2}}$$

is a unit vector orthogonal to $\partial C_1$ at $\vec x^{(j)}(t_1)$. (2.2) and (2.3) imply that the orbital angular momentum

$$L^{(j)}(t) := m\left(x_1^{(j)}(t)\,v_2^{(j)}(t) - x_2^{(j)}(t)\,v_1^{(j)}(t)\right) \tag{2.4}$$

of the $j$-th mass point is invariant along its trajectory $\big(\vec x^{(j)}(t), \vec v^{(j)}(t)\big)_{t \ge 0}$ for $j = 1, \dots, N$. It follows that the total thermal angular momentum of the gas

$$L(t) := \sum_{j=1}^{N} L^{(j)}(t) \tag{2.5}$$

is constant and can be determined from the initial data $\big(\vec x^{(1)}(0), \dots, \vec x^{(N)}(0); \vec v^{(1)}(0), \dots, \vec v^{(N)}(0)\big) \in C_1^N \times \mathbb{R}^{2N}$. Since the initial data are random, we can interpret the total thermal angular momentum of the gas as a stochastic process $(L(t))_{t \ge 0}$ (cf. (2.4) and (2.5)).
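The invariance of $L^{(j)}$ under (2.3) can be seen directly: the normal $\vec u$ is parallel to $\vec x^{(j)}(t_1)$, so the reflection changes $\vec v$ only by a multiple of $\vec x$, which drops out of the cross product in (2.4). A short sketch (our own illustration, in dimensionless units):

```python
import numpy as np

def reflect(v, x):
    """Velocity after a reflection (2.3) at boundary point x of the ring;
    u = x/|x| is the radial unit normal there."""
    u = x / np.linalg.norm(x)
    return v - 2.0 * np.dot(v, u) * u

def angular_momentum(m, x, v):
    """Orbital angular momentum (2.4) of a single mass point."""
    return m * (x[0] * v[1] - x[1] * v[0])
```

Evaluating the angular momentum before and after `reflect` at any boundary point reproduces the invariance claimed for (2.4) up to rounding error.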

The symmetry of the distributions of $\vec x^{(j)}(0)$ and of $\vec v^{(j)}(0)$ entails that

$$E(L(t)) = 0 \qquad (t \in \mathbb{R}_+) \tag{2.6}$$

holds where $E$ denotes expectation. Standard computations applying the rotational symmetry of $C_1$ and of the distribution of $\vec v^{(j)}(0)$ yield for the variance

$$\operatorname{Var}\big(L^{(j)}(0)\big) = m^2\,E\!\left(\left(x_1^{(j)}(0)\,v_2^{(j)}(0)\right)^2 + \left(x_2^{(j)}(0)\,v_1^{(j)}(0)\right)^2\right) \tag{2.7}$$

$$= \frac{m^2}{2}\left(R_1^2 + r_1^2\right)\sigma^2 \tag{2.8}$$

for $j = 1, \dots, N$. The stochastic independence of the position and velocity vectors of the mass points implies the following formula

$$\operatorname{Var}(L(t)) = \sum_{j=1}^{N} \operatorname{Var}\big(L^{(j)}(t)\big) = \frac{N}{2}\,m^2\left(R_1^2 + r_1^2\right)\sigma^2 \tag{2.9}$$

for the variance of the total thermal angular momentum of the gas at time $t \ge 0$.

2.1 Example: Put $N = 10^4$ (number of mass points), $T = 300\,$K (room temperature), $m = $ … kg (mass of an electron), and $\varrho = $ … m⁻² $= (\varrho_{Cu})^{2/3}$ where $\varrho_{Cu} = $ … m⁻³ is the density of the electronic gas in copper (cf. [3]). Put $r_1 = $ … m and $R_1 = $ … m for the parameters $r_1$ and $R_1$ of ring $C_1$. If $N$ mass points are injected into $C_1$, then $\varrho$ is the density of the gas.
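Formula (2.9) is easy to confirm by Monte Carlo sampling of the initial data. The sketch below is our own illustration and uses dimensionless illustrative values ($m = \sigma = r_1 = 1$, $R_1 = 2$), not the physical parameters of Example 2.1:

```python
import numpy as np

def sample_L(num, m, sigma, r1, R1, rng):
    """One realization of the total angular momentum L(0), cf. (2.5):
    positions uniform on the ring r1 <= |x| <= R1, velocities N(0, sigma^2 I_2)."""
    rad = np.sqrt(rng.uniform(r1**2, R1**2, size=num))  # uniform on the annulus
    phi = rng.uniform(0.0, 2.0 * np.pi, size=num)
    x1, x2 = rad * np.cos(phi), rad * np.sin(phi)
    v1 = rng.normal(0.0, sigma, size=num)
    v2 = rng.normal(0.0, sigma, size=num)
    return m * np.sum(x1 * v2 - x2 * v1)
```

Repeated draws of $L(0)$ should have empirical mean close to 0 (cf. (2.6)) and empirical variance close to $\frac{N}{2}m^2(R_1^2+r_1^2)\sigma^2$ from (2.9).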

Figure 1: Ring $C_1$ filled with electron gas

According to (2.9) the variance $\operatorname{Var}(L(t))$ of the total thermal angular momentum of the gas is

$$\operatorname{Var}(L(t)) = \frac{N}{2}\,m\left(R_1^2 + r_1^2\right) k_B T = \text{…}\ \mathrm{kg^2\,m^4/s^2} \tag{2.10}$$

for $t \ge 0$.

In our simplified model the total thermal angular momentum $L(t)$ does not alter with time $t \in \mathbb{R}_+$; this observation entails the formula

$$\operatorname{cov}(L(s), L(t)) = \operatorname{cov}(L(0), L(t - s)) = \operatorname{Var}(L(0)) \qquad (0 \le s \le t) \tag{2.11}$$

for the autocovariance of the process $(L(t))_{t \in \mathbb{R}_+}$.

3. The Case of an Angled Container

Throughout this section we fix the number $N = 10^4$ of electrons and the density $\varrho = $ … m⁻² of the electronic gas (cf. Example 2.1). We consider container $C_2$ defined by

$$C_2 = \{(x_1, x_2) \in \mathbb{R}^2 \mid r_2 \le \max(|x_1|, |x_2|) \le R_2\}$$

where parameters $r_2$ and $R_2$ are determined by the conditions $r_2 = 0.8\,R_2$ and

$$\frac{N}{4\,(R_2^2 - r_2^2)} = \frac{N}{\operatorname{vol}(C_2)} = \varrho\,.$$

Figure 2: Container $C_2$ filled with electron gas

We inject $N$ mass points (electrons) of mass $m = $ … kg into $C_2$ by generating the initial positions $\vec x^{(1)}(0), \dots, \vec x^{(N)}(0) \in C_2$ according to the uniform distribution over $C_2$ and the initial velocities $\vec v^{(1)}(0), \dots, \vec v^{(N)}(0) \in \mathbb{R}^2$ according to the Gaussian distribution $N(0, \sigma^2 I_2)$ whose parameter $\sigma$ corresponds to the temperature $T = 300\,$K, cf. (2.1). Again, we impose the Newtonian dynamics on the system of $N$ non-interacting mass points and compute the trajectory $\big(\vec x^{(1)}(t), \dots, \vec x^{(N)}(t); \vec v^{(1)}(t), \dots, \vec v^{(N)}(t)\big)_{t \ge 0}$ in the phase space $C_2^N \times \mathbb{R}^{2N}$. Contrary to the ring-shaped case, the total thermal angular momentum

$$L(t) = m\sum_{j=1}^{N}\left(x_1^{(j)}(t)\,v_2^{(j)}(t) - x_2^{(j)}(t)\,v_1^{(j)}(t)\right)$$

is not constant over time $t$ because the mass points exchange angular momentum with the boundary $\partial C_2$ during reflections.

Put $\Delta t = $ … s. We interpret $(L(t))_{t \ge 0}$ as a trajectory of a stationary and $\alpha$-mixing stochastic process (cf. [4]) and sample its values

$$X_k := L(\Delta t \cdot k) \qquad (k = 0, 1, \dots, K = 10^5)$$

during a computer experiment imitating the thermal motion of the electrons within $C_2$. Due to the symmetry of $C_2$ and of the distribution of the momentary velocities $\vec v^{(1)}(t), \dots, \vec v^{(N)}(t)$ for $t \ge 0$, we can assume

$$E(X_k) = 0$$

for $k = 0, 1, \dots$.

Figure 3: A trajectory of the process $L(t)$

In the diagram in Figure 3 the horizontal axis corresponds to computer experimental time (the unit is s); the vertical axis corresponds to the total angular momentum of the electron gas (the unit is kg·m²/s). The diagram shows in particular that the total thermal angular momentum fluctuates.

Under mild assumptions on the process $(X_k)$ (cf. [4, sec. 2]) the autocovariance function

$$\gamma(l) := \operatorname{cov}(X_k, X_{k+l}) = E(X_k X_{k+l}) \qquad (0 \le l \le K)$$

can be consistently estimated by

$$\hat\gamma(l) := \frac{1}{K - l + 1}\sum_{k=0}^{K-l} X_k X_{k+l} \tag{3.1}$$

for $0 \le l \ll K$, cf. [4].

Figure 4: Estimate of the autocovariance of the process $L(t)$

In Figure 4 the horizontal axis corresponds to the time lag (the unit is s); the vertical axis corresponds to the covariance of angular momentum (the unit is kg²·m⁴/s²). The diagram shows an estimate of the autocovariance function of the process $(L(t))_{t \ge 0}$ obtained by applying estimator $\hat\gamma$ to computer experimental data. The diagram confirms that the autocovariance tends to decrease for increasing time lag in the considered case of an angled container.
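The estimator (3.1) is a one-liner; the sketch below (our own illustration) applies it to a centered sample $X_0, \dots, X_K$:

```python
import numpy as np

def autocov(X, l):
    """The estimator (3.1) for a centered sample X_0, ..., X_K:
    (1/(K-l+1)) * sum_{k=0}^{K-l} X_k X_{k+l}."""
    K = len(X) - 1
    return float(np.dot(X[:K - l + 1], X[l:])) / (K - l + 1)
```

For uncorrelated noise the estimate at lag 0 approximates the variance while the estimates at positive lags are close to 0; applied to the sampled process $L(\Delta t\cdot k)$ it yields curves decaying with the lag as in Figure 4.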

4. Experimental Pendants of the Results

Let us consider two samples of a copper wire of length 1500 m and diameter 0.05 mm each. The first sample is arranged as a solenoid, being a pendant of $C_1$; the second sample is arranged irregularly and corresponds to $C_2$.

Figure 5: A regular and an irregular arrangement of a wire

Figure 5 shows the measurements of the thermal voltage signals between the ends of the regularly and the irregularly arranged copper wires by an oscilloscope. The upper diagram on the screen of the oscilloscope shows the thermal voltage signal in the solenoid; the amplitude of the thermal noise exceeds 15 mV. The lower diagram on the screen shows the thermal voltage signal in the irregularly arranged wire; its amplitude is significantly smaller than that for the case of the solenoid. These observations comply qualitatively with the theoretical and simulation results obtained and discussed in Sections 2 and 3.


More information

THE BONNESEN-TYPE INEQUALITIES IN A PLANE OF CONSTANT CURVATURE

THE BONNESEN-TYPE INEQUALITIES IN A PLANE OF CONSTANT CURVATURE J Korean Math Soc 44 007), No 6, pp 1363 137 THE BONNESEN-TYPE INEQUALITIES IN A PLANE OF CONSTANT CURVATURE Jiazu Zhou and Fangwei Chen Reprinted from the Journal of the Korean Mathematical Society Vol

More information

Affine surface area and convex bodies of elliptic type

Affine surface area and convex bodies of elliptic type Affine surface area and convex bodies of elliptic type Rolf Schneider Abstract If a convex body K in R n is contained in a convex body L of elliptic type (a curvature image), then it is known that the

More information

Some isoperimetric inequalities with application to the Stekloff problem

Some isoperimetric inequalities with application to the Stekloff problem Some isoperimetric inequalities with application to the Stekloff problem by A. Henrot, Institut Élie Cartan, UMR7502 Nancy Université - CNRS - INRIA, France, e-mail : antoine.henrot@iecn.u-nancy.fr. G.A.

More information

Banach Journal of Mathematical Analysis ISSN: (electronic)

Banach Journal of Mathematical Analysis ISSN: (electronic) Banach J. Math. Anal. 2 (2008), no., 70 77 Banach Journal of Mathematical Analysis ISSN: 735-8787 (electronic) http://www.math-analysis.org WIDTH-INTEGRALS AND AFFINE SURFACE AREA OF CONVEX BODIES WING-SUM

More information

Algebraic Models in Different Fields

Algebraic Models in Different Fields Applied Mathematical Sciences, Vol. 8, 2014, no. 167, 8345-8351 HIKARI Ltd, www.m-hikari.com http://dx.doi.org/10.12988/ams.2014.411922 Algebraic Models in Different Fields Gaetana Restuccia University

More information

On closed Weingarten surfaces

On closed Weingarten surfaces On closed Weingarten surfaces Wolfgang Kühnel and Michael Steller Abstract: We investigate closed surfaces in Euclidean 3-space satisfying certain functional relations κ = F (λ) between the principal curvatures

More information

SME 3023 Applied Numerical Methods

SME 3023 Applied Numerical Methods UNIVERSITI TEKNOLOGI MALAYSIA SME 3023 Applied Numerical Methods Solution of Nonlinear Equations Abu Hasan Abdullah Faculty of Mechanical Engineering Sept 2012 Abu Hasan Abdullah (FME) SME 3023 Applied

More information

Laplace Type Problem with Non-uniform Distribution

Laplace Type Problem with Non-uniform Distribution Applied Mathematical Sciences, Vol. 1, 16, no. 3, 1595-16 HIKARI Ltd, www.m-hikari.com http://dx.doi.org/1.1988/ams.16.66 Laplace Type Problem with Non-uniform Distribution Giuseppe Caristi Department

More information

Circumscribed Polygons of Small Area

Circumscribed Polygons of Small Area Discrete Comput Geom (2009) 41: 583 589 DOI 10.1007/s00454-008-9072-z Circumscribed Polygons of Small Area Dan Ismailescu Received: 8 October 2007 / Revised: 27 February 2008 / Published online: 28 March

More information

SOME NEW BONNESEN-STYLE INEQUALITIES

SOME NEW BONNESEN-STYLE INEQUALITIES J Korean Math Soc 48 (2011), No 2, pp 421 430 DOI 104134/JKMS2011482421 SOME NEW BONNESEN-STYLE INEQUALITIES Jiazu Zhou, Yunwei Xia, and Chunna Zeng Abstract By evaluating the containment measure of one

More information

Buffon-Laplace Type Problem for an Irregular Lattice

Buffon-Laplace Type Problem for an Irregular Lattice Applied Mathematical Sciences Vol. 11 17 no. 15 731-737 HIKARI Ltd www.m-hikari.com https://doi.org/1.1988/ams.17.783 Buffon-Laplace Type Problem for an Irregular Lattice Ersilia Saitta Department of Economics

More information

HILBERT BASIS OF THE LIPMAN SEMIGROUP

HILBERT BASIS OF THE LIPMAN SEMIGROUP Available at: http://publications.ictp.it IC/2010/061 United Nations Educational, Scientific and Cultural Organization and International Atomic Energy Agency THE ABDUS SALAM INTERNATIONAL CENTRE FOR THEORETICAL

More information

2016 OHMIO Individual Competition

2016 OHMIO Individual Competition 06 OHMIO Individual Competition. Taylor thought of three positive integers a, b, c, all between and 0 inclusive. The three integers form a geometric sequence. Taylor then found the number of positive integer

More information

On the Length of Lemniscates

On the Length of Lemniscates On the Length of Lemniscates Alexandre Eremenko & Walter Hayman For a monic polynomial p of degree d, we write E(p) := {z : p(z) =1}. A conjecture of Erdős, Herzog and Piranian [4], repeated by Erdős in

More information

Poncelet s porism and periodic triangles in ellipse 1

Poncelet s porism and periodic triangles in ellipse 1 Poncelet s porism and periodic triangles in ellipse 1 Vladimir Georgiev, Veneta Nedyalkova 1 Small historical introduction One of the most important and beautiful theorems in projective geometry is that

More information

Journal of Algebra 226, (2000) doi: /jabr , available online at on. Artin Level Modules.

Journal of Algebra 226, (2000) doi: /jabr , available online at   on. Artin Level Modules. Journal of Algebra 226, 361 374 (2000) doi:10.1006/jabr.1999.8185, available online at http://www.idealibrary.com on Artin Level Modules Mats Boij Department of Mathematics, KTH, S 100 44 Stockholm, Sweden

More information

Convergence of a Generalized Midpoint Iteration

Convergence of a Generalized Midpoint Iteration J. Able, D. Bradley, A.S. Moon under the supervision of Dr. Xingping Sun REU Final Presentation July 31st, 2014 Preliminary Words O Rourke s conjecture We begin with a motivating question concerning the

More information

Bulletin of the. Iranian Mathematical Society

Bulletin of the. Iranian Mathematical Society ISSN: 1017-060X (Print) ISSN: 1735-8515 (Online) Bulletin of the Iranian Mathematical Society Vol. 41 (2015), No. 3, pp. 581 590. Title: Volume difference inequalities for the projection and intersection

More information

MATH32062 Notes. 1 Affine algebraic varieties. 1.1 Definition of affine algebraic varieties

MATH32062 Notes. 1 Affine algebraic varieties. 1.1 Definition of affine algebraic varieties MATH32062 Notes 1 Affine algebraic varieties 1.1 Definition of affine algebraic varieties We want to define an algebraic variety as the solution set of a collection of polynomial equations, or equivalently,

More information

Margin Maximizing Loss Functions

Margin Maximizing Loss Functions Margin Maximizing Loss Functions Saharon Rosset, Ji Zhu and Trevor Hastie Department of Statistics Stanford University Stanford, CA, 94305 saharon, jzhu, hastie@stat.stanford.edu Abstract Margin maximizing

More information

Problem 1 (From the reservoir to the grid)

Problem 1 (From the reservoir to the grid) ÈÖÓ º ĺ ÙÞÞ ÐÐ ÈÖÓ º ʺ ³ Ò Ö ½ ½¹¼ ¹¼¼ ËÝ Ø Ñ ÅÓ Ð Ò ÀË ¾¼½ µ Ü Ö ËÓÐÙØ ÓÒ ÌÓÔ ÀÝ ÖÓ Ð ØÖ ÔÓÛ Ö ÔÐ ÒØ À Èȵ ¹ È ÖØ ÁÁ Ð ÖÒ Ø Þº ÇØÓ Ö ¾ ¾¼½ Problem 1 (From the reservoir to the grid) The causality diagram

More information

Minimization of Quadratic Forms in Wireless Communications

Minimization of Quadratic Forms in Wireless Communications Minimization of Quadratic Forms in Wireless Communications Ralf R. Müller Department of Electronics & Telecommunications Norwegian University of Science & Technology, Trondheim, Norway mueller@iet.ntnu.no

More information

σ-hermitian Matrices Geometries on Joint work with Andrea Blunck (Hamburg, Germany) University of Warmia and Mazury Olsztyn, November 30th, 2010

σ-hermitian Matrices Geometries on Joint work with Andrea Blunck (Hamburg, Germany) University of Warmia and Mazury Olsztyn, November 30th, 2010 Geometries on σ-hermitian Matrices Joint work with Andrea Blunck (Hamburg, Germany) University of Warmia and Mazury Olsztyn, November 30th, 2010 Scientific and Technological Cooperation Poland Austria

More information

On Polya's Orchard Problem

On Polya's Orchard Problem Rose-Hulman Undergraduate Mathematics Journal Volume 7 Issue 2 Article 9 On Polya's Orchard Problem Alexandru Hening International University Bremen, Germany, a.hening@iu-bremen.de Michael Kelly Oklahoma

More information

On closed Weingarten surfaces

On closed Weingarten surfaces On closed Weingarten surfaces Wolfgang Kühnel and Michael Steller Abstract: We investigate closed surfaces in Euclidean 3-space satisfying certain functional relations κ = F (λ) between the principal curvatures

More information

µ(, y) Computing the Möbius fun tion µ(x, x) = 1 The Möbius fun tion is de ned b y and X µ(x, t) = 0 x < y if x6t6y 3

µ(, y) Computing the Möbius fun tion µ(x, x) = 1 The Möbius fun tion is de ned b y and X µ(x, t) = 0 x < y if x6t6y 3 ÈÖÑÙØØÓÒ ÔØØÖÒ Ò Ø ÅÙ ÙÒØÓÒ ÙÖ ØÒ ÎØ ÂÐÒ Ú ÂÐÒÓÚ Ò ÐÜ ËØÒÖÑ ÓÒ ÒÖ Ì ØÛÓµ 2314 ½¾ ½ ¾ ¾½ ¾ ½ ½¾ ¾½ ½¾ ¾½ ½ Ì ÔÓ Ø Ó ÔÖÑÙØØÓÒ ÛºÖºØº ÔØØÖÒ ÓÒØÒÑÒØ ½ 2314 ½¾ ½ ¾ ¾½ ¾ ½ ½¾ ¾½ ½¾ ¾½ Ì ÒØÖÚÐ [12,2314] ½ ¾ ÓÑÔÙØÒ

More information

MORE ON THE PEDAL PROPERTY OF THE ELLIPSE

MORE ON THE PEDAL PROPERTY OF THE ELLIPSE INTERNATIONAL JOURNAL OF GEOMETRY Vol. 3 (2014), No. 1, 5-11 MORE ON THE PEDAL PROPERTY OF THE ELLIPSE I. GONZÁLEZ-GARCÍA and J. JERÓNIMO-CASTRO Abstract. In this note we prove that if a convex body in

More information

PURE MATHEMATICS AM 27

PURE MATHEMATICS AM 27 AM Syllabus (014): Pure Mathematics AM SYLLABUS (014) PURE MATHEMATICS AM 7 SYLLABUS 1 AM Syllabus (014): Pure Mathematics Pure Mathematics AM 7 Syllabus (Available in September) Paper I(3hrs)+Paper II(3hrs)

More information

Mathematics 1 Lecture Notes Chapter 1 Algebra Review

Mathematics 1 Lecture Notes Chapter 1 Algebra Review Mathematics 1 Lecture Notes Chapter 1 Algebra Review c Trinity College 1 A note to the students from the lecturer: This course will be moving rather quickly, and it will be in your own best interests to

More information

MAT1035 Analytic Geometry

MAT1035 Analytic Geometry MAT1035 Analytic Geometry Lecture Notes R.A. Sabri Kaan Gürbüzer Dokuz Eylül University 2016 2 Contents 1 Review of Trigonometry 5 2 Polar Coordinates 7 3 Vectors in R n 9 3.1 Located Vectors..............................................

More information

How does universality of coproducts depend on the cardinality?

How does universality of coproducts depend on the cardinality? Volume 37, 2011 Pages 177 180 http://topology.auburn.edu/tp/ How does universality of coproducts depend on the cardinality? by Reinhard Börger and Arno Pauly Electronically published on July 6, 2010 Topology

More information

TSI Mathematics & Statistics Test - Elementary Algebra

TSI Mathematics & Statistics Test - Elementary Algebra TSI Mathematics & Statistics Test - Elementary Algebra Querium Lesson Titles Adult Education Standards Aligned TEKS Aligned CCRS Solving Equations Using the Distributive Property Solving Equations by Combining

More information

Algebro-geometric aspects of Heine-Stieltjes theory

Algebro-geometric aspects of Heine-Stieltjes theory Dedicated to Heinrich Eduard Heine and his 4 years old riddle Algebro-geometric aspects of Heine-Stieltjes theory Boris Shapiro, Stockholm University, shapiro@math.su.se February, 9 Introduction and main

More information

Citation Osaka Journal of Mathematics. 43(2)

Citation Osaka Journal of Mathematics. 43(2) TitleIrreducible representations of the Author(s) Kosuda, Masashi Citation Osaka Journal of Mathematics. 43(2) Issue 2006-06 Date Text Version publisher URL http://hdl.handle.net/094/0396 DOI Rights Osaka

More information

L p -Width-Integrals and Affine Surface Areas

L p -Width-Integrals and Affine Surface Areas LIBERTAS MATHEMATICA, vol XXX (2010) L p -Width-Integrals and Affine Surface Areas Chang-jian ZHAO and Mihály BENCZE Abstract. The main purposes of this paper are to establish some new Brunn- Minkowski

More information

DESK Secondary Math II

DESK Secondary Math II Mathematical Practices The Standards for Mathematical Practice in Secondary Mathematics I describe mathematical habits of mind that teachers should seek to develop in their students. Students become mathematically

More information

Computing Minimal Polynomial of Matrices over Algebraic Extension Fields

Computing Minimal Polynomial of Matrices over Algebraic Extension Fields Bull. Math. Soc. Sci. Math. Roumanie Tome 56(104) No. 2, 2013, 217 228 Computing Minimal Polynomial of Matrices over Algebraic Extension Fields by Amir Hashemi and Benyamin M.-Alizadeh Abstract In this

More information

Integrated Math III. IM3.1.2 Use a graph to find the solution set of a pair of linear inequalities in two variables.

Integrated Math III. IM3.1.2 Use a graph to find the solution set of a pair of linear inequalities in two variables. Standard 1: Algebra and Functions Students solve inequalities, quadratic equations, and systems of equations. They graph polynomial, rational, algebraic, and piece-wise defined functions. They graph and

More information

INRIA Sophia Antipolis France. TEITP p.1

INRIA Sophia Antipolis France. TEITP p.1 ÌÖÙ Ø ÜØ Ò ÓÒ Ò ÓÕ Ä ÙÖ ÒØ Ì ÖÝ INRIA Sophia Antipolis France TEITP p.1 ÅÓØ Ú Ø ÓÒ Ï Ý ØÖÙ Ø Ó ÑÔÓÖØ ÒØ Å ÒÐÝ ÈÖÓÚ Ò ÌÖÙØ Ø Ò ÑÔÐ Ã Ô ÈÖÓÚ Ò ÌÖÙ Ø È Ó ÖÖÝ Ò ÈÖÓÓ µ Ò Ö ØÝ ÓÑ Ò ËÔ ÔÔÐ Ø ÓÒ TEITP p.2 ÇÙØÐ

More information

Deviation Measures and Normals of Convex Bodies

Deviation Measures and Normals of Convex Bodies Beiträge zur Algebra und Geometrie Contributions to Algebra Geometry Volume 45 (2004), No. 1, 155-167. Deviation Measures Normals of Convex Bodies Dedicated to Professor August Florian on the occasion

More information

A Short Note on Gage s Isoperimetric Inequality

A Short Note on Gage s Isoperimetric Inequality A Short Note on Gage s Isoperimetric Inequality Hong Lu Shengliang Pan Department of Mathematics, East China Normal University, Shanghai, 262, P. R. China email: slpan@math.ecnu.edu.cn December 7, 24 Abstract

More information

A Laplace Type Problems for a Lattice with Cell Composed by Three Quadrilaterals and with Maximum Probability

A Laplace Type Problems for a Lattice with Cell Composed by Three Quadrilaterals and with Maximum Probability Applied Mathematical Sciences, Vol. 8, 1, no. 165, 879-886 HIKARI Ltd, www.m-hikari.com http://dx.doi.org/1.1988/ams.1.11915 A Laplace Type Problems for a Lattice with Cell Composed by Three Quadrilaterals

More information

Asymptotic Behaviour of λ-convex Sets in the Hyperbolic Plane

Asymptotic Behaviour of λ-convex Sets in the Hyperbolic Plane Geometriae Dedicata 76: 75 89, 1999. 1999 Kluwer Academic Publishers. Printed in the Netherlands. 75 Asymptotic Behaviour of λ-convex Sets in the Hyperbolic Plane EDUARDO GALLEGO and AGUSTÍ REVENTÓS Departament

More information

Center of Gravity and a Characterization of Parabolas

Center of Gravity and a Characterization of Parabolas KYUNGPOOK Math. J. 55(2015), 473-484 http://dx.doi.org/10.5666/kmj.2015.55.2.473 pissn 1225-6951 eissn 0454-8124 c Kyungpook Mathematical Journal Center of Gravity and a Characterization of Parabolas Dong-Soo

More information

y mx 25m 25 4 circle. Then the perpendicular distance of tangent from the centre (0, 0) is the radius. Since tangent

y mx 25m 25 4 circle. Then the perpendicular distance of tangent from the centre (0, 0) is the radius. Since tangent Mathematics. The sides AB, BC and CA of ABC have, 4 and 5 interior points respectively on them as shown in the figure. The number of triangles that can be formed using these interior points is () 80 ()

More information

Secondary School Certificate Examination Syllabus MATHEMATICS. Class X examination in 2011 and onwards. SSC Part-II (Class X)

Secondary School Certificate Examination Syllabus MATHEMATICS. Class X examination in 2011 and onwards. SSC Part-II (Class X) Secondary School Certificate Examination Syllabus MATHEMATICS Class X examination in 2011 and onwards SSC Part-II (Class X) 15. Algebraic Manipulation: 15.1.1 Find highest common factor (H.C.F) and least

More information

Check boxes of Edited Copy of Sp Topics (was 217-pilot)

Check boxes of Edited Copy of Sp Topics (was 217-pilot) Check boxes of Edited Copy of 10024 Sp 11 213 Topics (was 217-pilot) College Algebra, 9th Ed. [open all close all] R-Basic Algebra Operations Section R.1 Integers and rational numbers Rational and irrational

More information

3.1. Derivations. Let A be a commutative k-algebra. Let M be a left A-module. A derivation of A in M is a linear map D : A M such that

3.1. Derivations. Let A be a commutative k-algebra. Let M be a left A-module. A derivation of A in M is a linear map D : A M such that ALGEBRAIC GROUPS 33 3. Lie algebras Now we introduce the Lie algebra of an algebraic group. First, we need to do some more algebraic geometry to understand the tangent space to an algebraic variety at

More information

Information About Ellipses

Information About Ellipses Information About Ellipses David Eberly, Geometric Tools, Redmond WA 9805 https://www.geometrictools.com/ This work is licensed under the Creative Commons Attribution 4.0 International License. To view

More information

Notes on Complex Analysis

Notes on Complex Analysis Michael Papadimitrakis Notes on Complex Analysis Department of Mathematics University of Crete Contents The complex plane.. The complex plane...................................2 Argument and polar representation.........................

More information

TARGET QUARTERLY MATHS MATERIAL

TARGET QUARTERLY MATHS MATERIAL Adyar Adambakkam Pallavaram Pammal Chromepet Now also at SELAIYUR TARGET QUARTERLY MATHS MATERIAL Achievement through HARDWORK Improvement through INNOVATION Target Centum Practising Package +2 GENERAL

More information

A representation for convex bodies

A representation for convex bodies Armenian Journal of Mathematics Volume 5, Number 1, 2013, 69 74 A representation for convex bodies R. H. Aramyan* * Russian-Armenian State University; Institute of mathematics of National Academy of Sciences

More information

arxiv: v2 [math.co] 11 Oct 2016

arxiv: v2 [math.co] 11 Oct 2016 ON SUBSEQUENCES OF QUIDDITY CYCLES AND NICHOLS ALGEBRAS arxiv:1610.043v [math.co] 11 Oct 016 M. CUNTZ Abstract. We provide a tool to obtain local descriptions of quiddity cycles. As an application, we

More information

The Nearest Doubly Stochastic Matrix to a Real Matrix with the same First Moment

The Nearest Doubly Stochastic Matrix to a Real Matrix with the same First Moment he Nearest Doubly Stochastic Matrix to a Real Matrix with the same First Moment William Glunt 1, homas L. Hayden 2 and Robert Reams 2 1 Department of Mathematics and Computer Science, Austin Peay State

More information

The Densest Packing of 13 Congruent Circles in a Circle

The Densest Packing of 13 Congruent Circles in a Circle Beiträge zur Algebra und Geometrie Contributions to Algebra and Geometry Volume 44 (003), No., 431-440. The Densest Packing of 13 Congruent Circles in a Circle Ferenc Fodor Feherviz u. 6, IV/13 H-000 Szentendre,

More information

THE GROUP OF UNITS OF SOME FINITE LOCAL RINGS I

THE GROUP OF UNITS OF SOME FINITE LOCAL RINGS I J Korean Math Soc 46 (009), No, pp 95 311 THE GROUP OF UNITS OF SOME FINITE LOCAL RINGS I Sung Sik Woo Abstract The purpose of this paper is to identify the group of units of finite local rings of the

More information

GEOMETRIC CONSTRUCTIONS AND ALGEBRAIC FIELD EXTENSIONS

GEOMETRIC CONSTRUCTIONS AND ALGEBRAIC FIELD EXTENSIONS GEOMETRIC CONSTRUCTIONS AND ALGEBRAIC FIELD EXTENSIONS JENNY WANG Abstract. In this paper, we study field extensions obtained by polynomial rings and maximal ideals in order to determine whether solutions

More information

Part IB GEOMETRY (Lent 2016): Example Sheet 1

Part IB GEOMETRY (Lent 2016): Example Sheet 1 Part IB GEOMETRY (Lent 2016): Example Sheet 1 (a.g.kovalev@dpmms.cam.ac.uk) 1. Suppose that H is a hyperplane in Euclidean n-space R n defined by u x = c for some unit vector u and constant c. The reflection

More information

On intervals containing full sets of conjugates of algebraic integers

On intervals containing full sets of conjugates of algebraic integers ACTA ARITHMETICA XCI4 (1999) On intervals containing full sets of conjugates of algebraic integers by Artūras Dubickas (Vilnius) 1 Introduction Let α be an algebraic number with a(x α 1 ) (x α d ) as its

More information

Polynomial functions on subsets of non-commutative rings a link between ringsets and null-ideal sets

Polynomial functions on subsets of non-commutative rings a link between ringsets and null-ideal sets Polynomial functions on subsets of non-commutative rings a lin between ringsets and null-ideal sets Sophie Frisch 1, 1 Institut für Analysis und Zahlentheorie, Technische Universität Graz, Koperniusgasse

More information

DETERMINING THE HURWITZ ORBIT OF THE STANDARD GENERATORS OF A BRAID GROUP

DETERMINING THE HURWITZ ORBIT OF THE STANDARD GENERATORS OF A BRAID GROUP Yaguchi, Y. Osaka J. Math. 52 (2015), 59 70 DETERMINING THE HURWITZ ORBIT OF THE STANDARD GENERATORS OF A BRAID GROUP YOSHIRO YAGUCHI (Received January 16, 2012, revised June 18, 2013) Abstract The Hurwitz

More information

Lars Schmidt-Thieme, Information Systems and Machine Learning Lab (ISMLL), Institute BW/WI & Institute for Computer Science, University of Hildesheim

Lars Schmidt-Thieme, Information Systems and Machine Learning Lab (ISMLL), Institute BW/WI & Institute for Computer Science, University of Hildesheim Course on Information Systems 2, summer term 2010 0/29 Information Systems 2 Information Systems 2 5. Business Process Modelling I: Models Lars Schmidt-Thieme Information Systems and Machine Learning Lab

More information

Math Requirements for applicants by Innopolis University

Math Requirements for applicants by Innopolis University Math Requirements for applicants by Innopolis University Contents 1: Algebra... 2 1.1 Numbers, roots and exponents... 2 1.2 Basics of trigonometry... 2 1.3 Logarithms... 2 1.4 Transformations of expressions...

More information

THE S 1 -EQUIVARIANT COHOMOLOGY RINGS OF (n k, k) SPRINGER VARIETIES

THE S 1 -EQUIVARIANT COHOMOLOGY RINGS OF (n k, k) SPRINGER VARIETIES Horiguchi, T. Osaka J. Math. 52 (2015), 1051 1062 THE S 1 -EQUIVARIANT COHOMOLOGY RINGS OF (n k, k) SPRINGER VARIETIES TATSUYA HORIGUCHI (Received January 6, 2014, revised July 14, 2014) Abstract The main

More information

Poisson line processes. C. Lantuéjoul MinesParisTech

Poisson line processes. C. Lantuéjoul MinesParisTech Poisson line processes C. Lantuéjoul MinesParisTech christian.lantuejoul@mines-paristech.fr Bertrand paradox A problem of geometrical probability A line is thrown at random on a circle. What is the probability

More information

MATH Spring 2010 Topics per Section

MATH Spring 2010 Topics per Section MATH 101 - Spring 2010 Topics per Section Chapter 1 : These are the topics in ALEKS covered by each Section of the book. Section 1.1 : Section 1.2 : Ordering integers Plotting integers on a number line

More information

Liberal High School Lesson Plans

Liberal High School Lesson Plans Monday, 5/8/2017 Liberal High School Lesson Plans er:david A. Hoffman Class:Algebra III 5/8/2017 To 5/12/2017 Students will perform math operationsto solve rational expressions and find the domain. How

More information

Area Formulas. Linear

Area Formulas. Linear Math Vocabulary and Formulas Approximate Area Arithmetic Sequences Average Rate of Change Axis of Symmetry Base Behavior of the Graph Bell Curve Bi-annually(with Compound Interest) Binomials Boundary Lines

More information

THE ISODIAMETRIC PROBLEM AND OTHER INEQUALITIES IN THE CONSTANT CURVATURE 2-SPACES

THE ISODIAMETRIC PROBLEM AND OTHER INEQUALITIES IN THE CONSTANT CURVATURE 2-SPACES THE ISODIAMETRIC PROBLEM AND OTHER INEQUALITIES IN THE CONSTANT CURVATURE -SPACES MARÍA A HERNÁNDEZ CIFRE AND ANTONIO R MARTÍNEZ FERNÁNDEZ Abstract In this paper we prove several new inequalities for centrally

More information

ON THE GRAPH ATTACHED TO TRUNCATED BIG WITT VECTORS

ON THE GRAPH ATTACHED TO TRUNCATED BIG WITT VECTORS ON THE GRAPH ATTACHED TO TRUNCATED BIG WITT VECTORS NICHOLAS M. KATZ Warning to the reader After this paper was written, we became aware of S.D.Cohen s 1998 result [C-Graph, Theorem 1.4], which is both

More information

Common Core State Standards for Mathematics - High School

Common Core State Standards for Mathematics - High School to the Common Core State Standards for - High School I Table of Contents Number and Quantity... 1 Algebra... 1 Functions... 3 Geometry... 6 Statistics and Probability... 8 Copyright 2013 Pearson Education,

More information