Indefinite survival through backup copies

Anders Sandberg and Stuart Armstrong

June 06, 2012

Abstract

If an individual entity endures a fixed probability µ < 1 of disappearing ("dying") in a given fixed time period, then, as time approaches infinity, the probability of death approaches certainty. One approach to avoid this fate is for individuals to copy themselves into different locations; if the copies each have an independent probability of dying, then the total risk is much reduced. However, to avoid the same ultimate fate, the entity must continue copying itself to continually reduce the risk of death. In this paper, we show that to get a non-zero probability of ultimate survival, it suffices that the number of copies grows logarithmically with time. Accounting for expected copy casualties, the required rate of copying is hence bounded.

1 Introduction

We aim to live forever [1], but we are surrounded by risk. Many existing risks can in principle be reduced or even removed. Unfortunately we are finite beings in a probabilistic universe, and we always have a finite chance of dying due to a random fluctuation. Wait long enough and your head will spontaneously quantum-tunnel away from your body. Or your body will spontaneously implode into a black hole. There doesn't seem to be any way this can be avoided, and for any kind of finite treatment or risk-reduction technology there are clearly some random events which are beyond it. The same is true for the survival of information or other valuable structures.

However, there might be a way around it if we can make backup copies. If an entity gets destroyed by something, restore it from the backup. Of course, backups are also vulnerable to random destruction. But suppose we distribute the copies widely so that their fates are independent of each other and of the original (in practice a very hard problem, see below), and whenever one gets destroyed it is replaced by a copy of a surviving copy [2]. Then the probability of all N copies being destroyed is µ^N, where µ > 0 is the probability (per unit of time) of one copy being destroyed. Typically µ^N ≪ µ even for modest N.

[1] At least some of us.
[2] We also assume copying is much faster than the inter-destruction time.
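To get a feel for how strong the µ^N reduction is, here is a minimal numerical sketch (not from the paper; the value of µ and the choices of N are arbitrary assumptions):

    mu = 0.01                   # assumed per-period death probability of one copy
    for N in (1, 2, 5, 10):
        risk = mu ** N          # probability that all N independent copies die in the same period
        print(f"N = {N:>2}:  per-period risk of total loss = {risk:.0e}  "
              f"(expected periods until total loss = {1 / risk:.0e})")

With these illustrative numbers a single copy is expected to be lost within about a hundred periods, while five independent copies already push the expected time to total loss to around ten billion periods.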

2 Is it possible to survive indefinitely?

Could making backup copies actually make indefinite survival possible? The probability of surviving forever is

    P = ∏_{t=1}^∞ (1 - µ^N(t)),

where N(t) is the number of copies at time t. P is obviously bounded from above by the smallest term in the product. If N(t) is constant for all t > T for some finite T, then the tail of constant terms will force the product to converge to zero (since each term is strictly between 0 and 1).

The product ∏_{t=1}^∞ (1 + a_t) converges if the sum ∑_{t=1}^∞ a_t converges absolutely [3]. Hence, if ∑_{t=1}^∞ µ^N(t) converges, indefinite survival is possible. Taking N(t) = t produces the usual convergent (since 0 < µ < 1) geometric series. So indefinite survival is possible with linear growth of backups.

3 Lower bounds

What is the slowest growth of N(t) that gives indefinite survival? There is no true lower bound, since for any N(t) that produces a nonzero P we can produce a more slowly increasing sequence Ñ(t) = {N(1), N(2), ..., N(i), N(i), ...} by repeating a particular term N(i), reducing the survival probability by a finite but nonzero factor (1 - µ^N(i)). However, it turns out that logarithmic growth suffices (and not all logarithmic growth rates are enough, giving a kind of lower bound):

Proposition 3.1. If N(t) = log_a(t) for 1 < a < 1/µ, then ∑_{t=1}^∞ µ^N(t) converges.

Proof. Let f(t) = µ^(log_a(t)). Then log_a f(t) = log_a(t) · log_a(µ) = log_a(t^(log_a(µ))). Hence f(t) = t^(log_a(µ)). Whenever 1 < a < 1/µ, f(t) = t^b for some b < -1. Hence ∫_1^∞ f(t) dt exists and is finite. Then, by the integral test, ∑_{t=1}^∞ f(t) = ∑_{t=1}^∞ µ^N(t) converges.

Of course, N(t) is an integer while log_a(t) is a real number, but replacing log_a(t) with the integer part ⌈log_a(t)⌉ (or any log_a(t) + g(t) for g(t) some bounded function of t) does not change the result.

Notice that if a is larger than 1/µ, then the series will diverge. So though logarithmic growth in the number of copies can ensure a non-zero probability of indefinite survival, there are logarithmic growth rates that fail to do so.

So, in practice, how many extra copies will we need to construct in order to ensure that we have logarithmic growth in the total number of copies at each time step? On average a proportion µ of copies will be destroyed at each time step. If R is the rate of copying, we need to ensure that

    R µ log_a(n) = log_a(n + 1),   or   R = log_a(n + 1) / (µ log_a(n)).

Thus the rate of copying needed is bounded above (and decreasing with time towards the replacement rate of 1/µ).

[3] For a proof, see J. L. Taylor, Complex Variables, AMS Pure and Applied Undergraduate Texts vol. 16, 2011, p. 235.
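The convergence behaviour above, and the copying rate R, can be checked numerically. The following is a minimal sketch, not from the paper: the values of µ, a and the truncation point T, and the helper survival(), are illustrative assumptions only.

    import math

    # Minimal sketch with assumed parameters: truncate P = prod_{t=1..T} (1 - mu^N(t)).
    mu, T = 0.1, 10**6

    def survival(N, T=T):
        """Approximate survival probability over T periods for a copy-count schedule N(t)."""
        log_p = sum(math.log1p(-mu ** N(t)) for t in range(1, T + 1))
        return math.exp(log_p)

    print("constant N = 3:        ", survival(lambda t: 3))                                   # tends to 0 as T grows
    print("N(t) = ceil(log_2 t):  ", survival(lambda t: max(1, math.ceil(math.log(t, 2)))))   # a = 2  < 1/mu: positive limit
    print("N(t) = ceil(log_20 t): ", survival(lambda t: max(1, math.ceil(math.log(t, 20)))))  # a = 20 > 1/mu: tends to 0

    # The copying rate R(n) = log_a(n+1) / (mu * log_a(n)) decreases towards 1/mu.
    a = 2.0
    for n in (10, 10**3, 10**6, 10**9):
        print(f"n = {n:>10}:  R(n) = {math.log(n + 1, a) / (mu * math.log(n, a)):.4f}  (1/mu = {1 / mu})")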

4 Time-varying risk

If the risk changes over time, the same analysis holds if time is reparametrized so that the risk per unit of (reparametrized) time is constant. For example, given µ(t), the reparametrization τ(t) = ∫_1^t µ(u) du produces a constant risk per unit of reparametrized time, µ(τ) = C, and the same analysis applies: the number of copies needs to grow logarithmically with τ.

As an example, consider a situation where the risk decreases as µ(t) = k t^(-α) (α ≥ 0). In this case τ(t) = C + k t^(1-α)/(1 - α), and N(t) would need to grow as N(t) ∝ (1 - α) log(t) or faster for α < 1. For α > 1 the total risk is bounded (τ converges to C), and there is no need for copying to ensure a chance of indefinite survival. For α = 1 the total risk is unbounded, but making a single copy at any finite t is enough to produce a finite survival probability.

5 Non-independent risks

If the risks to the different copies are correlated to some extent (for instance if they all live together around an unstable star) or depend on the population size (for instance, if large collections of backups become tempting targets to terrorists), then the above equations no longer hold.

If the correlated risk to the whole population never diminishes towards zero, then indefinite survival is of course impossible. But under the assumption that the risk of death tends to zero as the number of copies increases, we can find a function M(t) such that the partially correlated risk, given that there are M(t) copies, is less than or equal to the uncorrelated risk given N(t) copies. Therefore there is a function f defined such that f(N(t), t) = M(t). If we assume the relationship between M and N is the same at any point in time, then this becomes a function f(N(t)) = M(t). Since we know that N(t) must grow logarithmically, this f gives us the necessary growth rate for M(t).

If, for instance, f is quadratic, then M(t) need only grow like the square of a logarithm, which is still extremely slow. So even in the case where we would need to square the number of copies to reduce the correlated risk to the same level as the uncorrelated risk, the growth rate needed to ensure a chance at indefinite survival is very low. The same holds for other polynomial rates of increase. It is only if we need exponentially more copies in the correlated case to reduce the risk down to the uncorrelated level that M(t) starts growing at an appreciable (linear) rate. Even there, the rate of copying R = M(t + 1)/(µ M(t)) is still bounded and decreasing towards 1/µ. Only if f grows as the exponential of an exponential (meaning that M(t) grows exponentially) will we face the need for rates of copying R that do not decrease towards 1/µ; only if f grows even faster do we need to have an unbounded R.

A toy example: if a fraction r of the population of backup copies have perfectly correlated risk outcomes and the rest have uncorrelated outcomes, then the risk of extinction will be µ^(1 + (1 - r)M(t)). This will be smaller than the µ^N(t) extinction risk in an uncorrelated population if M(t) > (N(t) - 1)/(1 - r). Hence f is linear in this case, merely requiring logarithmic growth.
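As a quick numerical check of the toy example (a minimal sketch; the values of µ, r and N are arbitrary assumptions), the partially correlated risk µ^(1 + (1 - r)M) drops below the uncorrelated risk µ^N exactly when M > (N - 1)/(1 - r):

    # Minimal sketch with assumed values: a fraction r of the M copies share a
    # perfectly correlated fate, the rest are independent.
    mu, r, N = 0.1, 0.5, 10
    threshold = (N - 1) / (1 - r)                    # here 18: M must exceed this
    for M in (17, 18, 19, 20):
        correlated = mu ** (1 + (1 - r) * M)         # extinction risk with partial correlation
        uncorrelated = mu ** N                       # extinction risk with N independent copies
        print(f"M = {M}: correlated {correlated:.1e} < uncorrelated {uncorrelated:.1e}? "
              f"{correlated < uncorrelated}  (need M > {threshold})")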

6 Common-mode risks

In reality it is likely that common-mode risks, extinction risks that affect all copies identically, are going to dominate over the extinction risk due to individual simultaneous bad luck. Common-mode risks can be exogenous or endogenous. Examples of endogenous risks are errors in the system that detects a destroyed copy, or errors in the system that finds and replaces it with a functional copy. These endogenous risks can obviously be handled by having multiple independent copy management systems, at the price of some extra complexity. These systems can themselves be viewed as copies subject to random accidents: the above analysis can be applied to them, suggesting that the number of independent copy management systems should also grow with the logarithm of time, with a meta-system replacing faulty systems and their copies. Continuing this, the overall system would be a tree-like system growing very slowly in height, occasionally replacing broken leaves (copies) or branches (copies and their copy-maintenance systems).

In practice this scheme will always be limited by exogenous common-mode risks, and by potential endogenous risks from the implementation of the architecture. Beyond a certain point these risks will dominate over the now minuscule risk of all copies getting randomly deleted, and there is no further advantage in expanding the scheme.

7 Upper bounds

There is no theoretical upper bound on how many copies can be made: clearly any exponential or superexponential growth rate is possible mathematically. Restricting the model to a one-tape Turing machine, making a copy of a given string on the tape takes a constant amount of time (quadratic in the length of the string), and there is just a single thread of copying. This means that the number of copies can grow at most linearly with time. Hence, in a Turing-machine

world, if the deletion rate is not so great that deletion detection or reliable copying becomes impossible, indefinite survival is possible.

In the physical universe parallel copying is possible. However, the fastest N(t) can grow physically is cubically, since the lightspeed limit keeps us within a growing sphere of radius ct and each copy requires some finite mass-energy. N(t) is also limited above by the accelerating expansion of the universe, since eventually we can no longer catch up with remote galaxies to use as material for backup copies and N(t) cannot grow any more. More seriously, beyond a certain time the universe might be unable to sustain life (or backups) due to the instability of matter able to store information, the ultimate common-mode risk.

However, for the foreseeable future a sensible backup policy might guarantee a good chance of survival. As we have shown, a drastic reduction in long-term risk does not necessarily require enormous resources.
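To get a feel for how modest the requirement is, here is a final minimal sketch comparing the logarithmically growing number of copies needed with a cubic ceiling on how many copies a lightspeed-limited sphere could hold; every constant in it (µ, a, and the assumed one copy per unit volume) is an arbitrary illustration, not a physical estimate.

    import math

    mu = 0.1
    a = 2.0                  # any base with 1 < a < 1/mu works (Proposition 3.1)
    for t in (10**3, 10**6, 10**9, 10**12):
        required = math.ceil(math.log(t, a))    # copies needed for a chance at indefinite survival
        ceiling = float(t) ** 3                 # illustrative cubic bound: one copy per unit volume, sphere of radius ~ c*t
        print(f"t = {t:.0e}:  required copies = {required:>3}, cubic ceiling ~ {ceiling:.0e}")

Even at t = 10^12 the required number of copies is only a few dozen, many orders of magnitude below the cubic ceiling.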