Iterated Function Systems

Section 1. Iterated Function Systems

Definition. A transformation f: R^m → R^m is a contraction (or is a contraction map, or is contractive) if there is a constant s with 0 ≤ s < 1 such that |f(x) − f(y)| ≤ s|x − y| for all x and y in R^m. We call any such number s a contractivity factor for f. The smallest s that works is called the best contractivity factor for f.

Example. Let f: R → R by f(x) = 0.9x + 2. Then f is a contraction mapping because |f(x) − f(y)| = |0.9x − 0.9y| = 0.9|x − y|. We may thus choose s = 0.9. [For that matter, we could also choose s = 0.95, but not s = 1.05 or s = 0.85. We generally pick the smallest possible choice for s, which is 0.9 in this example. Thus s = 0.95 would not get full credit.]

Lemma. Let f: R^2 → R^2 be a similarity, f(x) = rx + t with r > 0. This combines a rescaling by r with a translation by the vector t = (t1, t2). Then f is a contraction provided r < 1. When this occurs, the best contractivity factor is r.

Proof. |f(x) − f(y)| = |rx + t − (ry + t)| = |rx − ry| = r|x − y|.

This gives lots of examples of contractions on R^2.

Theorem. Suppose f: R → R is differentiable everywhere and |f′(x)| ≤ s < 1 for all x ∈ R. Then f is a contraction with contractivity factor s.

Proof. By the Mean Value Theorem, |f(x) − f(y)| / |x − y| = |f′(c)| ≤ s for some c between x and y. But then |f(x) − f(y)| ≤ s|x − y|.

Example. f(x) = 0.5 cos x is a contraction on R.

Definition. A (hyperbolic) iterated function system (IFS) on R^m is a finite set of contraction mappings w1: R^m → R^m, w2: R^m → R^m, ..., wn: R^m → R^m.

Example. On R, w1(x) = (1/3)x, w2(x) = (1/3)x + (2/3). This is an IFS.

Definition. Given an IFS, let H(R^m) be the collection of compact nonempty subsets of R^m. More explicitly, these are the subsets of R^m which are bounded, closed, and nonempty. We define the set transformation W: H(R^m) → H(R^m) by

W(B) = w1(B) ∪ w2(B) ∪ ... ∪ wn(B).

If for each i, wi is a contraction map with contractivity factor si, let s = max{s1, s2, ..., sn}. We say that the contractivity factor of W is s.

Example. On R, w1(x) = (1/3)x, w2(x) = (1/3)x + (2/3). Then W(B) = w1(B) ∪ w2(B) has contractivity factor 1/3.

Definition. If A ∈ H(R^m) for a hyperbolic IFS is fixed under W (i.e., W(A) = A), then A is called the attractor of the IFS.
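The first example above can be checked numerically (a minimal sketch in Python, not part of the original notes): iterating f(x) = 0.9x + 2 from any starting point converges to its unique fixed point, which solves x = 0.9x + 2, i.e. x = 20, with the distance to the fixed point shrinking by the factor s = 0.9 at every step.

```python
# Iterate the contraction f(x) = 0.9x + 2 from the notes.
# Its unique fixed point solves x = 0.9x + 2, i.e. x = 20.
f = lambda x: 0.9 * x + 2

x = 0.0
for _ in range(200):
    x = f(x)  # each step shrinks |x - 20| by the factor 0.9

print(round(x, 6))  # 20.0
```

Starting from any other x0 gives the same limit; only the constant in front of 0.9^n changes.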

Example. On R, w1(x) = (1/3)x, w2(x) = (1/3)x + (2/3). The attractor is the Cantor set K, since W(K) = K. Typically the attractor is a fractal.

Contraction Mapping Theorem. Let w1, ..., wn be a hyperbolic IFS on R^m with contractivity factor s. Then the IFS possesses exactly one attractor A. Suppose B consists of a fixed point p of some wi (i.e., B = {p}). Then A consists of all the limits of all converging sequences x0, x1, x2, ... such that for all i, xi lies in W^i(B). In fact, if B is any member of H(R^m), then the sequence {W^n(B)} converges to A. [The definition of what it means for a sequence of sets to converge to a set is complicated.]

Section 2. Drawing the Attractor of an IFS

METHOD 1: THE "DETERMINISTIC ALGORITHM"

Suppose we are given a hyperbolic IFS with set transformation W. Choose an input drawing B. Draw B, W(B), W^2(B), ..., W^n(B) until you are satisfied.

See the applet http://classes.yale.edu/fractals/software/detifs.html on the course web page, listed as "Deterministic Iterated Function Systems". This is based on the formulas for the Sierpinski triangle: let w1(x) = (1/2)x, w2(x) = (1/2)x + (0, 1/2), w3(x) = (1/2)x + (1/2, 0). Then W has contractivity factor 1/2. The attractor A satisfies W(A) = A, and the successive pictures converge to A. By choosing the squiggle on the left, we can select a different input drawing. Then hit the play arrow to draw the resulting figures W^n(B); the small arrow performs just one step.

Rather than an arbitrary input drawing B, it is useful to choose a fixed point p of some wi. Let A0 = {p} ∈ H(R^2). Draw A0. Draw A1 = W(A0). Draw A2 = W(A1). In general, we let Aj+1 = W(Aj) and draw the Aj's until we are satisfied.

Example. Consider again the Cantor set example. Let w1(x) = (1/3)x, w2(x) = (1/3)x + (2/3). Choose B = {0} and consider W^n(B):

B = {0}
W(B) = {0, 2/3}
W^2(B) = {0, 2/9, 2/3, 8/9}
W^3(B) = {0, 2/27, 6/27, 8/27, 18/27, 20/27, 24/27, 26/27}
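The Cantor set iteration can be reproduced in a few lines (a sketch, not part of the notes; exact fractions are used so the computed sets match the listing above):

```python
from fractions import Fraction

# The Cantor-set IFS: w1(x) = (1/3)x, w2(x) = (1/3)x + 2/3.
w1 = lambda x: x / 3
w2 = lambda x: x / 3 + Fraction(2, 3)

def W(B):
    """One step of the set transformation W(B) = w1(B) union w2(B)."""
    return sorted({w1(x) for x in B} | {w2(x) for x in B})

B = [Fraction(0)]
for n in range(3):
    B = W(B)

# W^3({0}) = {0, 2/27, 6/27, 8/27, 18/27, 20/27, 24/27, 26/27}
print([str(x) for x in B])
```

Note that Fraction reduces 6/27 to 2/9, 18/27 to 2/3, and 24/27 to 8/9, so the printed values are the same numbers in lowest terms.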
In practice this means that there is a finite set A0 corresponding to the pixels that are darkened. We draw A0. For example, if A0 consists of a single point p, then A0 = {p}, A1 = {w1(p), ..., wn(p)}, A2 = {wi(wj(p))}. In practice you make a linked list of the points to be drawn. This method does not use the probabilities.

Why does this work?

Lemma. A fixed point p of w1 lies in A.

Proof. Let B = {p}. Then x0 = p ∈ B,

x1 = w1(p) = p ∈ W(B),
x2 = w1(x1) = p ∈ W^2(B), ...

Thus the sequence xi = p satisfies xi ∈ W^i(B), whence p is a limit point. Hence p ∈ A.

Corollary. For each n, W^n(B) ⊆ A.

Proof. (1) For each i, wi(p) ∈ wi(A) ⊆ W(A) = A. Hence W(B) ⊆ A. (2) Each point of W^2(B) has the form wi(q) for q ∈ W(B). But q ∈ A. Hence wi(q) ∈ wi(A) ⊆ W(A) = A. Hence W^2(B) ⊆ A. (3) The general case follows by induction.

By the Contraction Mapping Theorem, A is the set of limit points of ∪ W^n(B). On the computer you can't distinguish limit points. Hence the method draws a picture indistinguishable on the computer from A.

Method 1 leads to lots of redundancy, since the same calculations are performed many times.

METHOD 2: THE "RANDOM ITERATION METHOD"

Give each map wi a probability pi with 0 < pi < 1 and Σ pi = 1. Let x0 be a point in A (for example, the fixed point of w1). Draw x0. Pick a map wi at random (so wi is chosen with probability pi). Let x1 = wi(x0) and draw x1. Pick a map wj at random and let x2 = wj(x1); draw x2. Repeat this. This goes very fast: you don't need to draw many extraneous points, and you don't have the overhead of keeping a long linked list.

Use the applet http://classes.yale.edu/fractals/software/randomifs.html. On the web page this is listed as "Random Iterated Function Systems". The maps are

w1(x, y) = 0.5(x, y)
w2(x, y) = 0.5(x, y) + (0.5, 0)
w3(x, y) = 0.5(x, y) + (0, 0.5)

For the Sierpinski triangle, we get good results if p1 = 0.33, p2 = 0.33, p3 = 0.34. Note the weird results, however, if p1 = 0.66, p2 = 0.30, p3 = 0.04. Then the second and third maps are used much less often than the first, so the detail doesn't fill in; since p3 is small, it is rare that we draw points in w3(A).

Frequently one sees the description to let x0 be any point (not necessarily a point of A); but then one gets the additional complication of omitting the first few (maybe 10) points from the drawing, since they will not be in A.
(After a while, the points will be so close to A that it will not matter for the picture whether the point was actually in A; telling how many iterates to wait may be a bit complicated sometimes.)

Example. Consider again the Cantor set example. Let w1(x) = (1/3)x, w2(x) = (1/3)x + (2/3), with p1 = 1/2 and p2 = 1/2. Start with x0 = 0 (the fixed point of w1). Repeatedly flip a coin, and apply w1 if Heads, w2 if Tails. The points we obtain are all in A and can get close to each point in A.
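A sketch of the random iteration method for the Sierpinski maps above (the seed, the point count, and the containment check are illustrative choices, not from the notes):

```python
import random

# Random iteration ("chaos game") for the Sierpinski triangle IFS,
# with equal probabilities p1 = p2 = p3 = 1/3 via random.choice.
maps = [
    lambda p: (0.5 * p[0],       0.5 * p[1]),        # w1
    lambda p: (0.5 * p[0] + 0.5, 0.5 * p[1]),        # w2
    lambda p: (0.5 * p[0],       0.5 * p[1] + 0.5),  # w3
]

random.seed(0)
x = (0.0, 0.0)   # fixed point of w1, so x0 lies in the attractor A
points = [x]
for _ in range(10_000):
    x = random.choice(maps)(x)   # pick wi at random; x_{k+1} = wi(x_k)
    points.append(x)

# Every point drawn lies in A, hence in the unit square containing A.
print(all(0.0 <= a <= 1.0 and 0.0 <= b <= 1.0 for a, b in points))  # True
```

Plotting `points` with any scatter plotter shows the Sierpinski triangle filling in; re-weighting the choice toward w1 reproduces the "weird results" described above.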

Why does the method work?

Lemma. Each xi lies in A.

Proof. x0 ∈ A by definition. x1 = wj(x0) for some j, hence lies in W(A) = A, so x1 ∈ A. x2 = wj(x1) for some j, hence lies in W(A) = A, so x2 ∈ A. Etc.

The convergence is a matter of probability (true only with high probability), but in any event every point drawn is in A.

Rule of Thumb for choosing the probabilities pi: make pi proportional to the estimated area of wi(A). Thus for the Sierpinski triangle, the 3 pieces wi(A) are the same size, so they should have the same probabilities. Hence pi = 1/3.

Section 3. The Collage Theorem

There remains the important problem, given a proposed attractor A, of finding an IFS that has A as its attractor. This is accomplished by the Collage Theorem. Let B ∈ H(R^m) and let ε > 0. Define

B + ε = {x ∈ R^m : there exists b ∈ B with |x − b| < ε} = B fattened by ε.

Example. B = a line segment. Example. B = the edge of a square.

Collage Theorem. Let L ∈ H(R^m). [This is what you hope to be the attractor.] Let ε ≥ 0 be given. [This is the allowed resolution.] Suppose [by hook or by crook] there is an IFS w1, w2, ..., wn with contractivity factor s, 0 ≤ s < 1, such that

W(L) ⊆ (L + ε) and L ⊆ (W(L) + ε).

Then the attractor A of the IFS satisfies:
(1) A ⊆ (L + ε/(1−s))
(2) L ⊆ (A + ε/(1−s))

Hence, if ε is small, A looks very much like L: A is contained in a fattened version of L, and L is contained in a fattened version of A.

Example 1. Let L be the Sierpinski triangle. (Use overhead.) Let w1(x) = (1/2)x, w2(x) = (1/2)x + (0, 1/2), w3(x) = (1/2)x + (1/2, 1/2). Then A = L.

Example. Give the maps for a hyperbolic IFS that draws the fractal below. Assume the bottom is [0,1].

Answer:
w1(x) = (1/3)x
w2(x) = (1/3)x + (1/3, 0)
w3(x) = (1/3)x + (2/3, 0)
w4(x) = (1/3)x + (0, 1/3)
w5(x) = (1/3)x + (0, 2/3)

Section 4. Transformations on R^2

Some types of transformations are used so much that they have names.

A map w: R^m → R^m by w(x) = x + t, where t is a fixed vector. The effect of w is to move each vector over by t. We say w translates by t.

A map w: R^m → R^m by w(x) = rx rescales by the scale factor r.

A map w: R^m → R^m by w(x) = rx + t first rescales, then translates by t.

A map w: R^2 → R^2 by w(x1, x2) = (x1, −x2). We say that w reflects in the x1 axis. We can rewrite w using matrices. Let R denote the matrix corresponding to reflection in the x1 axis:

R = [ 1   0 ]
    [ 0  −1 ]

Then w(x) = Rx using matrix multiplication.
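The answer above can be spot-checked numerically (an illustrative sketch; the iteration count is arbitrary): iterating the set transformation W from the fixed point (0, 0) of w1, every point stays in the unit square, and since each of the 5^n compositions lands at a distinct point, W^n({(0, 0)}) has exactly 5^n points.

```python
# The five maps from the answer: each contracts by 1/3, then translates.
maps = [
    lambda p: (p[0] / 3,         p[1] / 3),          # w1
    lambda p: (p[0] / 3 + 1 / 3, p[1] / 3),          # w2
    lambda p: (p[0] / 3 + 2 / 3, p[1] / 3),          # w3
    lambda p: (p[0] / 3,         p[1] / 3 + 1 / 3),  # w4
    lambda p: (p[0] / 3,         p[1] / 3 + 2 / 3),  # w5
]

A = {(0.0, 0.0)}                          # fixed point of w1
for _ in range(3):
    A = {w(p) for w in maps for p in A}   # one step of W

print(len(A))                                           # 125 = 5**3
print(all(0 <= x <= 1 and 0 <= y <= 1 for x, y in A))   # True
```

The points are distinct because each one has a unique base-3 "address" determined by the sequence of maps applied.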

A map w: R^2 → R^2 by w(x) = Rθ x, where Rθ denotes the rotation matrix

Rθ = [ cos θ  −sin θ ]
     [ sin θ   cos θ ]

corresponds to counterclockwise rotation about the origin by the angle θ.

Ex. Rotate by 45 degrees. Show the unit square so rotated.

Ex. To rotate clockwise by 45 degrees, instead rotate counterclockwise by −45 degrees.

A similitude on R^2 is a transformation w: R^2 → R^2 that has either of the forms

w(x) = r Rθ x + v   or   w(x) = r Rθ R x + v,

where we shall insist that the constant r satisfies r > 0. These formulas translate explicitly into the matrix forms

w(x) = [ r cos θ  −r sin θ ] [x1] + [e]
       [ r sin θ   r cos θ ] [x2]   [f]

or

w(x) = [ r cos θ   r sin θ ] [x1] + [e]
       [ r sin θ  −r cos θ ] [x2]   [f]

where v = (e, f). The first says: take x, rotate by θ, then rescale by r, then translate by v. The second says: take x, reflect in the x1 axis, rotate by θ, rescale by r, then finally translate by v.

Example. Find the formula for the similitude that first rotates by 45 degrees and then translates by (1, 2)^T.

Solution. With θ = 45 degrees,

f(x) = [ √2/2  −√2/2 ] x    and    g(x) = x + (1, 2).
       [ √2/2   √2/2 ]

Hence what we want is

g(f(x)) = [ √2/2  −√2/2 ] x + [ 1 ]
          [ √2/2   √2/2 ]     [ 2 ]
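A quick numerical check of the worked example (a sketch; the test point (1, 0) is an arbitrary choice, not from the notes):

```python
import math

# The similitude g(f(x)): rotate by 45 degrees, then translate by (1, 2).
theta = math.radians(45)
c, s = math.cos(theta), math.sin(theta)

def w(x1, x2):
    # w(x) = R_theta x + (1, 2)
    return (c * x1 - s * x2 + 1.0, s * x1 + c * x2 + 2.0)

# (1, 0) rotates to (sqrt(2)/2, sqrt(2)/2), then translates by (1, 2):
y1, y2 = w(1.0, 0.0)
print(round(y1, 4), round(y2, 4))  # 1.7071 2.7071
```

Swapping the order (translate first, then rotate) gives a different map, which is why the composition order in the solution matters.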