Logic and cryptography
Charles University in Prague
Faculty of Mathematics and Physics

MASTER THESIS

Bc. Vojtěch Wagner

Logic and cryptography

Department of Algebra
Supervisor of the master thesis: prof. RNDr. Jan Krajíček, DrSc.
Study programme: Mathematics
Specialization: Mathematical Methods of Information Security

Prague 2015
I would like to thank the supervisor of my master thesis, prof. RNDr. Jan Krajíček, DrSc., for the interesting topic, for his great patience, for his willingness to help and advise, and for the valuable advice he provided during this work.
I declare that I carried out this master thesis independently, and only with the cited sources, literature and other professional sources. I understand that my work relates to the rights and obligations under the Act No. 121/2000 Coll., the Copyright Act, as amended, in particular the fact that the Charles University in Prague has the right to conclude a license agreement on the use of this work as a school work pursuant to Section 60 paragraph 1 of the Copyright Act.

In Prague, date, Bc. Vojtěch Wagner
Název práce (Title in Czech): Logika a kryptografie
Autor: Bc. Vojtěch Wagner
Katedra: Katedra algebry (Department of Algebra)
Vedoucí diplomové práce: prof. RNDr. Jan Krajíček, DrSc.

Abstrakt (in English translation): The thesis studies methods for the formalization of cryptographic constructions, specifically a method based on defining a logical theory T which contains strings, numbers and objects of sort k, the k-ary functions. We allow certain operations on them and formulate axioms, terms and formulas. We use a special type of term, the counting term, which denotes the number of elements x in a given interval satisfying a formula ϕ(x). Thanks to counting terms we can talk about probabilities and use further notions from probability theory. The thesis first describes this theory in detail and then presents a formalization of the Goldreich-Levin theorem. The goal is to present the needed cryptographic notions and constructions in the language of the theory T and then to prove the theorem using the objects, rules and axioms of T. The stated definitions and principles are illustrated on examples. The aim of the thesis is to show that such a theory is strong enough to prove the correctness and security of a cryptographic construction of this kind.

Klíčová slova (Keywords, translated): cryptography, protocol verification, Soundness theorem, formal logic theory, the Goldreich-Levin theorem

Title: Logic and cryptography
Author: Bc. Vojtěch Wagner
Department: Department of Algebra
Supervisor: prof. RNDr. Jan Krajíček, DrSc.

Abstract: This work is devoted to the study of a formal method for the formalization of cryptographic constructions. It is based on defining a multi-sorted formal logic theory T composed of strings, integers and objects of sort k, the k-ary functions. We allow some operations on them and formulate axioms, terms and formulas. We also have a special type of integer called a counting integer, denoting the number of x from a given interval satisfying a formula ϕ(x). This allows us to talk about probabilities and use notions of probability theory. The work first describes this theory and then presents a formalization of the Goldreich-Levin theorem. The goal of this work is to adapt all needed cryptographic terms into the language of T and then prove the theorem using the objects, rules and axioms of T. The presented definitions and principles are illustrated on examples. The purpose of this work is to show that such a theory is sufficiently strong to prove such cryptographic constructions and verify their correctness and security.

Keywords: cryptography, protocol verification, Soundness theorem, formal logic theory, the Goldreich-Levin theorem
Contents

Introduction
1 Formal system for cryptographic protocols verification
  1.1 General introduction
  1.2 Elements of the system
  1.3 The language
  1.4 Terms and formulas
  1.5 Axioms of the theory
  1.6 Properties of counting terms
  1.7 Soundness of the theory
2 The Goldreich-Levin theorem
  2.1 Hard-Core predicate
  2.2 The Goldreich-Levin theorem
  2.3 Adaptation in our theory
    Cryptographic terms
    Inner product
    The theorem
3 Proof of the Goldreich-Levin theorem
  Idea of the original proof
  Idea of the formalized proof
  Steps of the proof
    Formalization of ϕ1
    Formalization of ϕ2
    Formalization of ψ
  Construction of the Goldreich-Levin algorithm
    Preparation
    The algorithm
    The correctness and the success probability of the algorithm
  Final step
Conclusion
Bibliography
Appendix
Introduction

With the massive expansion of network interchange protocols, big data storage and ICT technologies in general, there also comes a question regarding their security and correctness. Because of the huge and continuously growing technological possibilities and opportunities, it is not sufficient to just provide some service; more and more attention goes to the quality of the provided service. Nowadays many companies have their data stored in some kind of ICT service. That immediately raises questions regarding the correctness, security, integrity and availability of those services, which must protect and correctly store the data.

In general, we deal with two things. One is the construction of a correct and secure protocol. Although this is the main task in cryptography, we will not deal with it in this work. In fact, the construction of a secure protocol is just the first part: we also need the possibility to verify that our constructed protocol is correct and secure enough. To do that, we can define a formal logic system. We allow as objects just strings and integers that are the length of some string. We allow specific operations on them and present some axioms. To be able to verify protocols, we introduce functions, terms and formulas, and special terms that give us the number of elements from a given interval satisfying some formula. We will call them counting integers, and having them, we can talk about probabilities. Such a theory was introduced by Russell Impagliazzo and Bruce Kapron in [IK03]. The idea is to present a formal logic system that can be easily implemented for automated protocol verification.

The goal of this work is to show how this theory can be used for proving some cryptologic construction. Namely, we will try to formalize and prove the well-known Goldreich-Levin theorem using this logic theory. The first chapter is devoted to a description of this formal system of logic.
We define the objects we will work with, strings and integers, and a special way of viewing strings as integers and vice versa. We present operations on them and define axioms, terms and formulas. We allow functions of arity k > 0. This gives us a tool strong enough to present the Soundness theorem, a general way to prove cryptographic properties, which we state at the end of the chapter. The chapter is closely based on the original work [IK03]. Besides that, we illustrate the presented principles on some examples.

In the second chapter we present a formalization of the Goldreich-Levin theorem in our theory. We show how to transform the cryptographic primitives used in the theorem into our formal system of logic, and finally we transform the theorem itself into our theory.

In the third chapter we try to prove the adapted Goldreich-Levin theorem using objects, axioms and rules of the defined theory. By giving an exact proof of the theorem, we show that the theory is strong enough to prove such cryptographic constructions. The proof uses ideas from the original proof (as presented for example in [G01b]), but the whole composition and transformation into the language of the theory is the result of my work.
1. Formal system for cryptographic protocols verification

1.1 General introduction

To create a secure and correct protocol which solves one or more tasks regarding privacy of data, integrity of data, availability of data or authentication is one of the most fundamental and important tasks in cryptography. But it is also one of the most difficult ones. A simple straightforward solution comes to mind immediately: we could take cryptographic primitives which are considered secure and correct and simply put them together, letting every single task be solved by an appropriate primitive. But although every particular primitive is secure and correct, the resulting protocol may not be. In fact, with high probability such a protocol will not be secure and correct unless we take care to connect the primitives together correctly and to use them correctly. This makes the construction of a cryptographic protocol a very difficult task.

Naturally, the question arises how to verify that some protocol is sound and secure enough. To be able to verify protocols with sufficient generality and independently of all factors, we need a formal system of axioms, rules, variables and functions in which we can interpret the whole protocol and prove its soundness by using the axioms of this system. Such a system was proposed by Russell Impagliazzo and Bruce Kapron, and we will describe it in detail in this chapter. The description is closely based on their work presented in [IK03]. We then present some examples for better understanding. First we give a description of the system, and at the end of this chapter we also show that this system allows us to verify cryptographic constructions via the Soundness theorem.

1.2 Elements of the system

We consider a formal logic system which uses the syntax of first order logic. It means we have the elements of propositional logic (i.e.
syntax, derivation rules and axioms) and also quantifiers and predicates. We also have the use of function symbols. Having all this, we get a tool strong enough to talk about probabilities, polynomial functions and asymptotic properties. For a complete introduction of the system, we have to define the axioms, the objects of the system and the rules we will use. Let us start with the primary objects of the system, i.e. the objects the system takes as basic.
Security parameter. Suppose we have some property which is true asymptotically. For example, consider the number of primes less than or equal to some value x, denoted π(x). As we know, π(x) is asymptotically equal to x/log x. But for a particular number x the equality π(x) = x/log x need not hold exactly. It always holds, however, that

lim_{n→∞} π(n) · log n / n = 1.

Considering that our goal is to create a tool which could be used for automatic protocol verification, we surely need a value such that all asymptotically true statements are true at this value. Let us fix a variable name n for such a value and call it the security parameter. Next, denote by s a string such that |s| = n (by |x| we mean the length of x in a specific representation, which will be described later).

Strings. The next objects we will work with are strings: the constant s, inputs and outputs of (probabilistic) algorithms, outputs of functions applied to strings. We denote by ɛ the empty string. Other string variables are denoted by letters a, b, c, ....

Integers. The next category of objects which we will use in our system are integers. We consider only those which are the length of some string, denoted |t| for a string t. So we have the number n for s (the security parameter), 0 for the string ɛ, 1 for the strings 0 and 1, etc. Using the dyadic representation we can also view strings as numbers and vice versa. The dyadic notation is built as follows. We assign 0 to the empty string ɛ, 1 to the string 0, 2 to the string 1, 3 to the string 00, 4 to the string 01, etc. In general, given a string b1 b2 b3 ... bk, we prepend 1, obtaining the string 1b1 b2 ... bk. The string 1b1 b2 ... bk represents some number m in standard binary notation, and we get the corresponding dyadic value by taking m − 1. This procedure can also be inverted to get the dyadic string corresponding to a number.
Given an integer m, we write down the binary representation of m + 1 and delete the leading 1 to obtain the dyadic notation. With this representation we can naturally use standard arithmetical operations on strings: we can add, subtract or compare them. For example, for x = 10011 and y = 0110 we know that x corresponds to the number 50 and y corresponds to 21. So x + y = 71, which is 001000 in dyadic notation, y ≤ x, etc.

Functions. We also need objects of the first order logic representing k-ary functions applied to variables x1, x2, ..., xk. We require these functions to be computable in time polynomial in the security parameter n, and to take strings as inputs and output strings. For each such function we have a function symbol, i.e. a variable, which represents it. This allows us to quantify over these function variables. Using function variables we will be able to denote adversaries against a cryptosystem or against some security property.
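The integer/string correspondence just described is easy to make concrete. The following Python sketch (an illustration only, not part of the formal theory) converts between integers and their dyadic strings:

```python
def to_dyadic(m: int) -> str:
    # Dyadic string of m: write m + 1 in binary and drop the leading 1.
    return bin(m + 1)[3:]          # bin() yields '0b1...', so strip '0b' and the 1

def from_dyadic(s: str) -> int:
    # Inverse: prepend 1, read the result as binary, subtract 1.
    return int('1' + s, 2) - 1
```

For example, from_dyadic('10011') gives 50 and from_dyadic('0110') gives 21, matching the worked example above.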
We will also find the following properties useful: composition of functions and application of arithmetical operations. We define the composition of functions using a special type of recursion, described later.

Counting terms. In cryptography we often use notions of probability theory and their properties. To be able to use such notions, we need one more type of object, the so-called counting terms. A counting term denotes an integer determining the number of elements of some set of strings. By the statement

#(|x1| = m1, |x2| = m2, ..., |xk| = mk) ϕ(x1, ..., xk)

we denote the number of vectors x = (x1, ..., xk) (with lengths of coordinates m1, ..., mk) satisfying the formula ϕ(x1, ..., xk). The exact calculation of this value need not be trivial; it may not even be polynomial time computable. Because of that, we stipulate that a counting integer is not an input to functions. And how can we use this notion for probabilities? According to the classical definition of probability, the probability of an event is the number of cases favorable to the event divided by the total number of all possible cases. As we said, the calculation of both values may not be trivial at all. But counting integers belong to the primary objects of our system, therefore the fraction

#(|x| = t)ϕ(x) / #(|x| = t)(x = x),

which represents the probability that x satisfies ϕ(x), is a permitted calculation.

1.3 The language

Constants. We assume the existence of the following constants:
0 corresponds to the empty string ɛ
1 corresponds to the string 0 in the dyadic notation
2 corresponds to the string 1 in the dyadic notation
n ∈ N is the security parameter
the string s ∈ {0, 1}* such that |s| = n

Variables. In the system we have two sorts of variables. We have function variables, representing k-ary functions; we will usually use the symbols f^k, g^k, h^k, ... for them. The second sort of variables is for integers and strings.
As we said, using the dyadic notation we can represent integers by strings and vice versa, so we can use a single variable for a string/integer pair. We will usually use symbols such as x, y, z, i, j, k, ... for integers/strings.

Operations. We allow the following operations in our theory.

The addition x + y, defined for integers. Having the unique correspondence between integers and strings, we can also view this as an operation on strings.

The multiplication x · y; as in the previous case we can view this as an operation on strings.
The length function |x|, representing the length of the string x.

The smash function x#y, defined as 2^{|x|·|y|}.

The limited subtraction x ∸ y, defined as max{x − y, 0}.

The operations x ≤ y and x = y, defined for integers. Again, we can view these as operations on strings.

Ap_k, for k ≥ 1, stands for the application of a k-ary function. Formally we write Ap_k(f^k, t1, ..., tk), meaning the application of the k-ary function f^k to the inputs t1, ..., tk. Mostly we will use the more usual notation f(t1, ..., tk).

1.4 Terms and formulas

Now let us describe the terms and formulas which we will use later. We define terms from the objects of the theory using the operations defined in the previous paragraph. Using the connectives and quantifiers of first-order logic we then also define the formulas of our theory.

Terms.
0, 1, 2, s, n are terms.
Every variable representing a string is a term.
Let t1, ..., tk be terms and f^k a function variable. Then Ap_k(f^k, t1, ..., tk) is a term.
Let r, s be terms. Then r + s, r · s, r#s, |r|, r ∸ s are terms.

Formulas. We define formulas from terms.
Let r, s be terms. Then r ≤ s and r = s are formulas.
If ϕ, ψ are formulas, then ¬ϕ, ϕ ∧ ψ, ϕ ∨ ψ, ϕ → ψ, ϕ ↔ ψ are formulas.
If ϕ is a formula and x is a string variable, then ∀xϕ and ∃xϕ are formulas.
If ϕ is a formula and f is a function variable, then ∀fϕ and ∃fϕ are formulas.

Counting terms. Counting terms represent the number of x's of length t satisfying a formula ϕ. We denote this #(|x| = t)ϕ, or more generally

#(|x1| = t1, ..., |xk| = tk)ϕ.

Counting terms are defined recursively: ϕ may contain another counting term. A counting term is a term.
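The intended semantics of counting terms is finite and concrete: over the strings of a fixed length t, #(|x| = t)ϕ is just a count, and the probability fraction from the previous section is an ordinary ratio. A brute-force Python sketch (for illustration; modelling a formula by a Python predicate is an assumption of this sketch, not an object of T):

```python
from itertools import product

def count(t, phi):
    """#(|x| = t) phi: number of 0/1-strings of length t satisfying phi."""
    return sum(1 for bits in product('01', repeat=t) if phi(''.join(bits)))

def prob(t, phi):
    """Probability that a uniformly random length-t string satisfies phi."""
    return count(t, phi) / count(t, lambda x: True)
```

For instance, count(3, lambda x: x[0] == '1') evaluates to 4, and the corresponding probability to 0.5.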
1.5 Axioms of the theory

We divide the axioms into several categories.

Axioms of the security parameter. In the paragraph describing the objects of the theory we introduced the security parameter s, a string of length n such that each asymptotically true statement is true at n. We can axiomatize this parameter as follows. Denote by k̲ the term 1 + 1 + ... + 1 (with k ones). Then for each k ≥ 0 there is an axiom Sk of the form |s| ≥ k̲. In other words, the length n of the security parameter s is greater than every fixed k ≥ 0.

Basic axioms. The basic axioms are a set of basic rules defining how we can work with variables and integers in our theory using the operations described above. They capture properties which are obvious, as well as properties which may not be clear at first sight. The complete list can be found in the appendix; here we show just those which will be useful later.

B10: |2x + 1| = |x| + 1 ∧ |2x + 2| = |x| + 1. To understand this axiom, it suffices to recall how we defined the correspondence between strings and integers. Let, for example, x = 1001 be a string, which corresponds to the number 24. Then 2x + 1 = 49 and 2x + 2 = 50. These values correspond to the strings 10010 and 10011, i.e. the string x extended by one bit from the right. Therefore 2x + 1 is the string x0 and 2x + 2 is the string x1. This axiom is very important and we will use it later in definitions of functions.

B1: (x ≠ 0 ∧ y ≠ 0) → |x#y| = |x| · |y|. We just have to recall how we defined the operation # and then use the correspondence between strings and integers. We have x#y = 2^{|x|·|y|}. This value has |x| · |y| + 1 bits in binary representation, therefore in the dyadic notation the resulting string has |x| · |y| bits.

B3: x ≠ 0 → |x| = |⌊(x − 1)/2⌋| + 1. What does this axiom say? Let x be of the form x = b1 b2 ... bk. Then ⌊(x − 1)/2⌋ is the string b1 b2 ... b(k−1), i.e. a string of length k − 1. Thus the axiom says that by adding one bit the length of a string increases by 1.

B33: (y ≠ 0 ∧ x = ⌊(y − 1)/2⌋) → (2x + 1 = y ∨ 2x + 2 = y). This axiom refines the axiom B3.
It says that, given two strings x, y of lengths k − 1 and k which agree on the first k − 1 bits, y is obtained from x by appending 0 or 1 to x. Again a very important axiom, which will be used later for function definitions.

Function axioms. To be able to work with polynomial functions, we need polynomial functions as values of function variables. The intended range of function variables is the class of non-uniform polynomial time functions. This cannot be enforced by a first order theory, so we need to add enough axioms to enforce some closure properties of this class. Let us now formulate these axioms. We define the axiom of projection (denoted PROJ(k, i)), which guarantees the
existence of a function returning the projection onto a given coordinate i: for all k and all i ≤ k,

∃f^k ∀x1, ..., xk (f(x1, ..., xk) = xi).

We also require the existence of the zero function, the function which returns the constant value 0 regardless of the input: for all k,

∃f^k ∀x1, ..., xk (f(x1, ..., xk) = 0).

We then need to define the axiom of composition of functions (denoted COMP(k, l)). For each k, l we have

∀g^{k+1} ∀h^{k+l} ∃f^{k+l} ∀x̄ ∀ȳ (f(x̄, ȳ) = g(x̄, h(x̄, ȳ))).

Thanks to this axiom the polynomial function variables are closed under composition.

The following axiom will also be very helpful in the further text. It describes limited recursion on notation (we denote it the LRN axiom) and gives us an excellent way to define many kinds of functions. We define it as follows:

∀g^k ∀h1^{k+2} ∀h2^{k+2} ∀b^{k+1} ∃f^{k+1} ∀x̄ ∀y (
  f(x̄, 0) = g(x̄)
  ∧ (h1(x̄, y, f(x̄, y)) ≤ b(x̄, 2y + 1) → f(x̄, 2y + 1) = h1(x̄, y, f(x̄, y)))
  ∧ (h1(x̄, y, f(x̄, y)) > b(x̄, 2y + 1) → f(x̄, 2y + 1) = b(x̄, 2y + 1))
  ∧ (h2(x̄, y, f(x̄, y)) ≤ b(x̄, 2y + 2) → f(x̄, 2y + 2) = h2(x̄, y, f(x̄, y)))
  ∧ (h2(x̄, y, f(x̄, y)) > b(x̄, 2y + 2) → f(x̄, 2y + 2) = b(x̄, 2y + 2))
).

The axiom gives us a procedure for building functions by extending strings bit by bit. We have a bounding function b, which gives a value that cannot be exceeded. At each step we append one bit to the current string (0 or 1; denote that bit by a), and if the value of the function h_a does not exceed b, the recursion rule applies; otherwise we use the bounding function b.

We need to add one more axiom, which is not explicitly present in the original theory [IK03] but is used there. This was first noticed by Emil Jeřábek in [J05]. The original theory does not provide the constant functions c_x(·) = x. Thus we also add the axiom CONST(x) in the form

∀x ∃f (f(0) = x).

This axiom also causes the range of function variables to be not the class of uniform polynomial time functions but the class of non-uniform polynomial time functions.
That is because it allows parameters to be substituted into functions. The axiom will be very useful in the further study, because it will allow us to replace, in the formalization, the class of adversaries that are randomized polynomial time algorithms by the larger class of non-uniform polynomial time algorithms (i.e. circuits).
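The LRN axiom above can be read operationally: f follows the dyadic notation of y, and every step is capped by the bounding function b. A Python sketch of this reading (the decomposition y = 2z+1 / y = 2z+2 and the names g, h1, h2, b mirror the axiom; the min-cap encodes the "otherwise use b" clauses):

```python
def lrn(g, h1, h2, b, x, y):
    """f(x, y) defined by limited recursion on notation (sketch)."""
    if y == 0:
        return g(x)
    if y % 2 == 1:                     # y = 2z + 1 (last dyadic bit 0)
        z = (y - 1) // 2
        v = h1(x, z, lrn(g, h1, h2, b, x, z))
    else:                              # y = 2z + 2 (last dyadic bit 1)
        z = (y - 2) // 2
        v = h2(x, z, lrn(g, h1, h2, b, x, z))
    return min(v, b(x, y))             # never exceed the bound b
```

With g(x) = 0, h1(x, z, w) = 2w + 1 and h2(x, z, w) = 2w + 2, the recursion rebuilds y bit by bit, so f(x, y) = y whenever b is large enough.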
Induction axiom. To be able to use proofs by induction, we need to define suitable axioms. We admit induction only for open formulas, i.e. formulas without quantifiers. We define induction along the length of the dyadic notation of the induction variable. Denote the induction axioms by the symbol LIND; for each open formula ϕ(x) and each polynomial p they have the form

(ϕ(0) ∧ ∀x < p(|y|) (ϕ(x) → ϕ(x + 1))) → ∀x ≤ p(|y|) ϕ(x).

Counting axioms. We now formulate several axioms for using counting terms. The first three of them are a modification of the Kolmogorov axioms of probability. Kolmogorov's probabilistic approach says that a discrete probability space Ω has at most countably many elements ω ∈ Ω. The Kolmogorov axiom of normality, which says that Pr[Ω] = 1, can be translated into our theory as

(C1) #(|x| = |y|)(ϕ(x) ∨ ¬ϕ(x)) = 2^{|y|}.

The next Kolmogorov axiom tells us that Pr[ω] ≥ 0 for ω ∈ Ω. In our theory we can rewrite it as

(C2) #(|x| = |y|)ϕ(x) ≥ 0.

For the next counting axiom let us note that for any formula ϕ it holds that {x̄ : ϕ(x̄)} ∩ {x̄ : ¬ϕ(x̄)} = ∅. So we can interpret the Kolmogorov axiom of additivity as

(C3) #(|x| = |y|)ϕ(x) = #(|x| = |y|)(ϕ(x) ∧ ψ(x)) + #(|x| = |y|)(ϕ(x) ∧ ¬ψ(x)).

We have some straightforward corollaries of these axioms. In classical probability theory we can derive from the Kolmogorov axioms that for any event ω' ⊆ Ω it holds that Pr[ω'] ∈ [0, 1]. Similarly, from (C1) and (C2) it follows that

0 ≤ #(|x| = |y|)ϕ(x) ≤ 2^{|y|}.

A further corollary of the Kolmogorov axioms is the monotonicity property: for two events ω1, ω2 such that ω1 ⊆ ω2 it holds that Pr[ω1] ≤ Pr[ω2]. For an interpretation in our theory denote R := {x̄ : ϕ(x̄)}, S := {x̄ : ψ(x̄)}. We would like to have an analogous property, ideally in the form #(|x̄| = r̄)ϕ ≤ #(|x̄| = r̄)ψ whenever R ⊆ S. But to make this correct, we need to take two things into account:

there has to exist the graph of an injective map M between the x̄ ∈ R and the ȳ ∈ S.
This can be done by defining a function r such that (x̄, ȳ) ∈ M ↔ r(x̄, ȳ) = 0. If such a function exists, then we know that there exists a relation between x̄ and ȳ.

this relation (between values from R and values from S) has to be injective. We can formalize this by the formula INJ(ū, v̄, r), where r is a function variable, saying that r is an injective mapping:

∀x̄ (|x̄| = ū) ∀z̄ (|z̄| = ū) ∀ȳ (|ȳ| = v̄) (r(x̄, ȳ) = 0 ∧ r(z̄, ȳ) = 0 → x̄ = z̄).
Then we can formulate the monotonicity property in our theory as follows:

(C4) ∃r (INJ(ū, v̄, r) ∧ ∀x̄ (|x̄| = ū) (ϕ(x̄) → ∃ȳ (|ȳ| = v̄) (ψ(ȳ) ∧ r(x̄, ȳ) = 0))) → #(|x̄| = ū)ϕ(x̄) ≤ #(|ȳ| = v̄)ψ(ȳ).

Encoding the theory into first order logic. Let us make one important digression now. We would like to encode T into first-order logic. How can we do that? Note that we can index polynomial functions of arity k using clocked Turing machines. Having that, we can view the application of a k-ary function as a function ap_k([i, k], s1, ..., sk). Such a function then represents the application of Turing machine i with time bound n^k to the inputs s1, ..., sk. In this way we can translate all function variables of T. Counting terms can be translated as follows: for every formula ϕ and every k > 0 we take a function ct^k_ϕ(s1, ..., sk) representing the number of tuples [x1, ..., xk] with lengths |xi| = si which satisfy ϕ.

1.6 Properties of counting terms

We have presented the basic counting axioms needed to reason about probabilities in our model. Let us introduce one abbreviation. By the symbol T we mean the theory we just presented, i.e. the objects along with the language, terms, formulas and axioms. For the next part we need the notion of derivability in T. For formulas ϕ, ψ of T we denote by T, ϕ ⊢ ψ the statement that ψ can be derived from ϕ using the basic, function, induction and counting axioms of T along with the rules and operations of T.

Now we present some further properties of counting terms which can be derived in T. We will use the following abbreviations. Let C be a counting term of the form #(|x̄| = t̄)ϕ. By the term #(|y| = u)C we mean the term #(|y| = u, |x̄| = t̄)ϕ. By the symbol ϕ[t/x] we mean the substitution of t for all free occurrences of x in the formula ϕ.

Lemma 1.1. ([IK03]). T ⊢ |x| = 0 → x = 0.

Proof: Suppose for contradiction that x ≠ 0. According to (B3) we have |x| = |⌊(x − 1)/2⌋| + 1. Denote k := |⌊(x − 1)/2⌋|. Then |x| = k + 1. We also have |x| = 0, hence k + 1 = 0. Since 0 ≤ k, we get k + 1 ≤ k. By (B6) we have k + 1 ≤ k ∨ k ≤ k + 1. If k > k + 1, we would get a contradiction with (B1), because k ≤ k. Hence k ≤ k + 1. Thus using (B7) we get k = k + 1, which leads to a contradiction.

Lemma 1.2. (C5) ([IK03]). For any counting term C it holds that T ⊢ #(|x| = 0)C = C(0).

Proof: Take the term C = #(|x̄| = t̄)ϕ. Then we have #(|x| = 0)C = #(|x| = 0, |x̄| = t̄)ϕ and C(0) = #(|x̄| = t̄)ϕ[0/x]. For any x, x̄ such that |x| = 0 ∧ |x̄| = t̄ ∧ ϕ, we have x = 0 by Lemma 1.1, so x̄ satisfies ϕ[0/x]. Therefore it follows from (C4) that #(|x| = 0)C ≤ C(0).
The reverse inequality also holds: by (B9) we have |0| = 0, and we get ∀x̄ (|x̄| = t̄)(ϕ[0/x] → ∃y (|y| = 0) C(y)). Thus #(|x̄| = t̄)ϕ[0/x] ≤ #(|y| = 0)C, which is what we wanted.

Lemma 1.3. (C6) ([IK03]). For any counting term C it holds that

T ⊢ #(|x| = y + 1)C = #(|x| = y)C[2x + 1/x] + #(|x| = y)C[2x + 2/x].

Proof: Take C of the form #(|x̄| = t̄)ϕ. We first show the inequality ≥, in other words

#(|x| = y)C[2x + 1/x] + #(|x| = y)C[2x + 2/x] ≤ #(|x| = y + 1, |x̄| = t̄)ϕ.

Suppose that |x| = y ∧ |x̄| = t̄ ∧ ϕ[2x + 1/x], resp. |x| = y ∧ |x̄| = t̄ ∧ ϕ[2x + 2/x]. Put z1 = 2x + 1 and z2 = 2x + 2. Then by (B10)

|z1| = y + 1 ∧ |x̄| = t̄ ∧ ϕ[z1/x] ∧ z1 = 2⌊(z1 − 1)/2⌋ + 1,
|z2| = y + 1 ∧ |x̄| = t̄ ∧ ϕ[z2/x] ∧ z2 = 2⌊(z2 − 1)/2⌋ + 2.

We then take functions r1, r2 with

r1({x, x̄}, {z1, ȳ}) = 0 ↔ z1 = 2x + 1 ∧ |x| = y,
r2({x, x̄}, {z2, ȳ}) = 0 ↔ z2 = 2x + 2 ∧ |x| = y.

From axiom (C4) (using the functions r1, r2) we get that

#(|x| = y)C[2x + 1/x] ≤ #(|x| = y + 1, |x̄| = t̄)(ϕ ∧ x = 2⌊(x − 1)/2⌋ + 1),
#(|x| = y)C[2x + 2/x] ≤ #(|x| = y + 1, |x̄| = t̄)(ϕ ∧ x = 2⌊(x − 1)/2⌋ + 2).

The two conditions on the right-hand sides are mutually exclusive, so the requested inequality follows directly from axiom (C3).

For the converse, by axiom (C3) we have

#(|x| = y + 1)C = #(|x| = y + 1, |x̄| = t̄)(ϕ ∧ x = 2⌊(x − 1)/2⌋ + 1) + #(|x| = y + 1, |x̄| = t̄)(ϕ ∧ x ≠ 2⌊(x − 1)/2⌋ + 1),

and by axiom (B33) every x of length y + 1 with x ≠ 2⌊(x − 1)/2⌋ + 1 satisfies x = 2⌊(x − 1)/2⌋ + 2. Using (C4) we have

#(|x| = y + 1, |x̄| = t̄)(ϕ ∧ x = 2⌊(x − 1)/2⌋ + 1) ≤ #(|x| = y)C[2x + 1/x],

and finally, according to (B33) and (C4),

#(|x| = y + 1, |x̄| = t̄)(ϕ ∧ x = 2⌊(x − 1)/2⌋ + 2) ≤ #(|x| = y)C[2x + 2/x].

That is what we wanted.

Lemma 1.4. (C7) ([IK03]). For every counting term C which does not contain x as a free variable it holds that

T ⊢ #(|x| = y)C = 2^y · C.

Proof: We prove this by induction using the axiom LIND. First, by property (C5) we have #(|x| = 0)C = C[0/x]. By assumption x is not free in C, hence C[0/x] = C = 2^0 · C. For the induction step suppose that #(|x| = y)C = 2^y · C. Then by (C6) we have

#(|x| = y + 1)C = #(|x| = y)C[2x + 1/x] + #(|x| = y)C[2x + 2/x].

But x is not free in C, so we can write

#(|x| = y)C[2x + 1/x] + #(|x| = y)C[2x + 2/x] = #(|x| = y)C + #(|x| = y)C.

By the induction assumption

#(|x| = y)C + #(|x| = y)C = 2^y · C + 2^y · C = 2 · 2^y · C.

By axiom (B14) we can write 2 · 2^y · C = 2^{y+1} · C. The lemma then follows directly from the LIND axiom.

Lemma 1.5. (C8). T ⊢ |z| = y → #(|x| = y, |x̄| = ȳ)(x = z ∧ ϕ) = #(|x̄| = ȳ)ϕ[z/x].

Proof: It follows from (C4) that for any z

#(|x| = y, |x̄| = ȳ)(x = z ∧ ϕ) ≤ #(|x̄| = ȳ)ϕ[z/x].

From (C4) we also have

|z| = y → #(|x̄| = ȳ)ϕ[z/x] ≤ #(|x| = y, |x̄| = ȳ)(x = z ∧ ϕ),

which gives us the requested equality.
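Under the intended dyadic semantics, the identities behind (C6) and (C7) can be checked by brute force for small lengths: the strings of length y + 1 are exactly the values 2x + 1 and 2x + 2 for x of length y. A Python sanity check (this models counting over integers of a given dyadic length; it illustrates the semantics and is not a proof in T):

```python
def count(t, phi):
    """#(|x| = t) phi: the integers of dyadic length t form the
    interval [2**t - 1, 2**(t+1) - 2]."""
    return sum(1 for x in range(2**t - 1, 2**(t + 1) - 1) if phi(x))
```

For any predicate phi, count(y + 1, phi) equals count(y, lambda x: phi(2*x + 1)) + count(y, lambda x: phi(2*x + 2)), mirroring (C6); and if phi ignores x, count(y, phi) is 2**y times its truth value, mirroring (C7).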
1.7 Soundness of the theory

The main reason for introducing the theory is to have a formal system for verifying the soundness of cryptographic constructions and protocols based on some known cryptographic primitive. In the last chapter we will also present an example of using the theory for a specific cryptographic construction. Now let us describe how we can verify constructions in general.

Suppose we have some cryptographic primitive which can be expressed as a polynomial function f, and another polynomial function f′ which is defined from f. Our goal is to prove some cryptographic property of f′. For example, f describes the security of some well-known cryptographic primitive or protocol which is considered secure, and f′ then describes the analogous security in a protocol that we are constructing. Let a formula ϕ2 with inputs f, f′ (occurring as free variables) describe the relation between f and f′. Suppose next that a formula ϕ1 formalizes some cryptographic property of f. Usually it will be a cryptographic property of the primitive, for example that no polynomial time adversary against f has better than negligible probability of breaking f. We can represent the adversary as a polynomial function g. Then we can write that security property as a formula ∀g ∀z̄ ϕ1(f, g, z̄, s), where z̄ are some (random) strings. We then want to prove that the function f′ satisfies a cryptographic property expressed by a formula ψ. Usually it will be a cryptographic property of the constructed protocol similar to ϕ1. For example, it could again be security of f′, meaning that no polynomial adversary attacking f′ has better than negligible probability of breaking f′. We can again write this property as a formula ∀g ∀z̄ ψ(f′, g, z̄, s). If we can derive in T the formula ψ from ϕ1 and ϕ2 with the rules of T, then the function f′ has the required property. That is the main idea of the Soundness theorem. Before we present its formal formulation, we need a few more definitions.

Definition 1.1. We say that a formula ϕ is bounded if it contains only bounded quantifiers, that is, quantifiers of the form ∀x (|x| ≤ y → ϕ) or ∃x (|x| ≤ y ∧ ϕ), where x is a string (or integer) variable, and no function quantifiers.

Definition 1.2. Let ϕ(f̄, z̄, x) be a formula with free variables f̄, z̄, x. For any sequence of non-uniform polynomial functions ᾱ we say that ϕ holds asymptotically in N if for every sequence of polynomials p̄ there exists n0 ∈ N such that for all s̄ and t ∈ N with t > n0 and |si| = pi(t), the formula ϕ[ᾱ/f̄, s̄/z̄, t/x] holds in N.

We now formulate the Soundness theorem. The problem is that, with the new axiom CONST, the theorem does not hold as stated in [IK03]. The reason is that the original theorem holds only for the class of uniform functions, but with the CONST axiom we also have non-uniform functions. To make the Soundness theorem hold, we need to change the formulation a little. The following
formulation comes from Emil Jeřábek's work [J05]. In that work the theorem is formulated for circuits, not for algorithms. But a polynomial size circuit is equivalent to a non-uniform polynomial time algorithm, so we can talk about circuits instead of non-uniform polynomial time algorithms. In the following, |C| denotes the size of the circuit C.

Theorem 1.6. (Soundness theorem) ([J05]). Let ϕ and ψ be bounded formulas. Assume that

T ⊢ ∀g ∀z̄ ϕ(f̄, g, z̄, s) → ∀g ∀z̄ ψ(f̄, g, z̄, s).

Let f̄ ∈ FP/poly. If for every polynomial q(n) the formula ϕ(f̄, C, z̄, x) holds for all sufficiently large x and all z̄ and C with |C| ≤ q(|x|), then the same is true of ψ.

The proof of the Soundness theorem is based on model theory: one defines a slightly modified theory for which there exists a model, and then uses properties of this model to conclude that the theorem holds. The complete proof of the original theorem can be found in [IK03]; the proof of the modified theorem as stated here can be found in [J05].
2. The Goldreich-Levin theorem

In the previous chapter we presented the formal logic system, the theory T, for formal reasoning about cryptographic constructions. We also promised to show an application of the theory to some well-known cryptographic concept. In the rest of this work we will try to formalize the Goldreich-Levin theorem and its proof in the theory T. The theorem deals with classic cryptographic concepts such as hard-core predicates and one-way functions. The proof of the theorem uses some other concepts and terms from cryptography and probability theory. Let us start with a quick introduction into this field of cryptology.

2.1 Hard-Core predicate

Among the most important concepts in constructing pseudorandom generators and secure cryptographic protocols are hard-core predicates and one-way functions. Consider first a function ε : N → R. This function is called negligible if and only if

∀k ≥ 1 ∃n_k ∀n ≥ n_k : ε(n) < 1/n^k.

For instance, the functions 2^(−n) and n^(−log n) are negligible, but n^(−100) is not.

Next consider a function f : {0,1}^n → {0,1}^m. We call it one-way if f(x), for x ∈ {0,1}^n, can be computed by a polynomial-time algorithm, but for every randomized algorithm A that runs in time polynomial in n there exists a negligible function ε(n) such that

∀n ∈ N : Pr_{x←U_n}[A(1^n, f(x)) ∈ f^(−1)(f(x))] ≤ ε(n),

where U_n is the uniform distribution on inputs of length n. In other words, the function f is one-way if we can compute it easily, but given a y from the range of f it is hard to compute a value x such that f(x) = y. Some examples: the factoring problem (given a value pq, determine one of the values p, q for chosen primes p, q satisfying some condition), and the discrete logarithm problem (determine x from the given values {p, g, g^x mod p}, where g is a generator of Z_p^*). It is also fair to say that these functions are only assumed to be one-way. In fact, we do not even know whether any one-way function exists.
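The defining condition of a negligible function can be checked numerically for concrete values of k. The sketch below (an illustration outside the theory; `negligible_witness` is a hypothetical helper name) uses exact integer arithmetic: 2^(−n) < 1/n^k is equivalent to n^k < 2^n, so a threshold n_k exists for every k, while for ε(n) = n^(−100) the condition fails at k = 101 for every n.

```python
def negligible_witness(k, n_max=1000):
    """Least n0 <= n_max such that n^k < 2^n (i.e. 2^-n < 1/n^k)
    holds for every n in [n0, n_max]; None if no such n0 exists."""
    n0 = None
    for n in range(1, n_max + 1):
        if n ** k < 2 ** n:       # exact integer comparison, no floats
            if n0 is None:
                n0 = n
        else:
            n0 = None             # condition failed; restart the search
    return n0

# 2^(-n) is negligible: a threshold n_k exists for each tested k.
for k in (1, 2, 5, 10):
    assert negligible_witness(k) is not None

# n^(-100) is NOT negligible: for k = 101 the required inequality
# n^(-100) < 1/n^101, i.e. n^101 < n^100, fails for every n >= 1.
assert all(not (n ** 101 < n ** 100) for n in range(1, 200))
```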
But we know that their existence is related to the well-known P vs. NP problem. In particular, the existence of a one-way function implies P ≠ NP. This can be shown by a contradiction argument: if P = NP, then the class of deterministic polynomial-time functions equals the class of nondeterministic polynomial-time functions. But there exists a nondeterministic polynomial-time algorithm that inverts any polynomial-time computable function. It follows that no one-way function exists. However, we cannot prove the opposite direction (that the existence of a one-way function follows from P ≠ NP).

One could expect that if some function is hard to invert, there should be some specific bits of the input x which are hard to compute given the value f(x). But consider this example. Let f be a one-way function and for any i = 0, …, n−1 define f_i as

f_i(x) = x_i ∥ f(x_0 … x_{i−1} x_{i+1} … x_{n−1}).

We see that each f_i is also one-way, but we can easily determine the i-th bit of the input from its output. So after n iterations we can determine the whole input. This raises an obvious question: is there some bit that can be extracted from x and is hard to compute given f(x)? We formalize this as follows:

Definition 2.1 (Hard-Core bit). Let f : {0,1}* → {0,1}* be a one-way function and b : {0,1}* → {0,1} a predicate. We say that b is a hard-core predicate for the function f if for every probabilistic polynomial-time algorithm A

Pr_x[A(f(x)) = b(x)] ≤ 1/2 + ε(n),

where ε(n) is a negligible function and x ← U_n is taken uniformly at random.

At the end of this paragraph let us see one interesting theorem. See, for example, [P09] for a proof.

Theorem 2.1 ([G01a]). Let f be a one-way permutation and b a hard-core predicate of f. Then the function g : {0,1}^n → {0,1}^{n+1} defined as g(x) = f(x) ∥ b(x) is a pseudorandom generator.

So this gives us a recipe for finding a proper pseudorandom generator — of course, only under the assumption that we have some one-way permutation.

2.2 The Goldreich-Levin theorem

Another question which comes to mind is whether there exists a hard-core bit for every one-way function. The answer is again unknown.
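The bit-leaking construction f_i above can be made concrete. In the sketch below SHA-256 merely stands in for a one-way function (it is only conjectured to be hard to invert; the names `f` and `f_i` mirror the text): each f_i hides all input bits except the i-th, yet querying all of them reconstructs the entire input.

```python
import hashlib

def f(bits: str) -> str:
    """Stand-in 'one-way' function: SHA-256 of a bit string."""
    return hashlib.sha256(bits.encode()).hexdigest()

def f_i(i: int, x: str) -> str:
    """f_i reveals bit i and applies f to the remaining bits."""
    return x[i] + f(x[:i] + x[i + 1:])

x = "1011001110001011"
# Collecting the first character of every f_i(x) recovers x completely,
# even though each f_i on its own is still (conjecturally) one-way.
recovered = "".join(f_i(i, x)[0] for i in range(len(x)))
assert recovered == x
```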
But the situation is not so bad, since Oded Goldreich and Leonid Levin have proved that every one-way function f can be transformed into another function g such that g is still a one-way function and g has a hard-core bit. We denote by ⟨a, b⟩ the inner product modulo 2.

Theorem 2.2 ([GL89]). Let f be a one-way function. Define the function g as g(x, r) = f(x) ∥ r for (x, r) ∈ {0,1}^n × {0,1}^n. Then the function b defined as b(x, r) = ⟨x, r⟩ is a hard-core predicate for the function g.
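The construction of Theorem 2.2 is easy to state over bit vectors. The sketch below again uses SHA-256 as a hypothetical stand-in for the one-way function f; the predicate b is the inner product ⟨x, r⟩ mod 2.

```python
import hashlib

def f(bits):
    # hypothetical stand-in for a one-way function
    return hashlib.sha256("".join(map(str, bits)).encode()).hexdigest()

def g(x, r):
    # g(x, r) = f(x) || r  --  r is output in the clear next to f(x)
    return (f(x), tuple(r))

def b(x, r):
    # b(x, r) = <x, r> mod 2, the Goldreich-Levin hard-core predicate
    return sum(xi & ri for xi, ri in zip(x, r)) & 1

x = [1, 0, 1, 1, 0, 1]
r = [1, 1, 0, 1, 0, 0]
# <x, r> = 1*1 + 0*1 + 1*0 + 1*1 + 0*0 + 1*0 = 2, so b(x, r) = 0
assert b(x, r) == 0
```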
2.3 Adaptation in our theory

2.3.1 Cryptographic terms

In the previous paragraphs we showed the main ideas and concepts leading to the Goldreich-Levin theorem, presenting the definitions in classic complexity theory. But we would like to use these concepts in automated systems, so we will now adapt these constructions to our formal logic system presented in the previous chapter.

We start with the negligible function. Recall that our formal system has a special constant n ∈ N, which we call the security parameter. We also have strings, which can be viewed as integers using dyadic notation, and function symbols for k-ary functions, where k > 0. Because of the basic property of the security parameter that each asymptotically true statement is true in n, and our intuition that all strings have length bounded by a polynomial in n (some models indeed have this property), we can transform the classic definition of a negligible function into the following one:

Definition 2.2. A function q : {0,1}^n → {0,1}^n is negligible if and only if ∀z ∀x with |x| = n : q(x) ≤ 1/z.

For the definition of a one-way function we need one more object from our theory — a counting integer #(|x| = k)ϕ(x). It denotes the number of strings x of length k satisfying ϕ(x). As we stated before, using this object we can reason about probabilities. We will use this idea right now.

Definition 2.3. We say that a function f : {0,1}* → {0,1}* is one-way if

∀A : {0,1}* → {0,1}* ∀z ∀k ∈ N :
#(|x| = k)(f(A(1^k, f(x))) = f(x)) / #(|x| = k)(x = x) ≤ 1/z,

where the function variable A ranges over non-uniform polynomial-time algorithms (thanks to the CONST axiom).

Similarly we can transform the definition of the hard-core predicate.

Definition 2.4. Let f be a one-way function. A function b : {0,1}* → {0,1} is a hard-core predicate for the function f if

∀A : {0,1}* → {0,1}* ∀z ∀k ∈ N :
#(|x| = k)(A(1^k, f(x)) = b(x)) / #(|x| = k)(x = x) ≤ 1/2 + 1/z.

2.3.2 Inner product

Our current goal is to present an adaptation of the Goldreich-Levin theorem in our formal system. Theorem 2.2 shows us how to find a hard-core predicate for a
specific type of one-way function. It says that the hard-core predicate has the form of the inner product modulo 2. So our first task is to find an interpretation of a function calculating the inner product modulo 2. Let us describe what this really means in our system.

Ordinarily the inner product of two k-coordinate tuples is defined as the sum of the products of the corresponding coordinates; the binary inner product modulo 2 is defined identically using arithmetic modulo 2. In our theory we do not have binary representation but dyadic representation. However, we will use the same logic. By the dyadic inner product we mean the number, taken modulo 2, of positions at which both dyadic representations carry the digit 2. In particular, if we have two numbers x, y with dyadic representations x = a_1 a_2 … a_k and y = b_1 b_2 … b_l, we define their dyadic inner product modulo 2 as

Σ_{i=1}^{min{k,l}} (a_{k−i+1} − 1)(b_{l−i+1} − 1) mod 2.

The recursion below first computes this count without the final reduction modulo 2; we deal with the reduction afterwards.

How can we compute it? Consider first the numbers 0, 0, which correspond to ε, ε. Obviously the resulting inner product is ε, and the same holds for the inner product of 0 and any other number. Now take strings b_1 b_2 … b_r and c_1 c_2 … c_r and denote their inner product by d. Then the inner product of b_1 b_2 … b_r b_{r+1} and c_1 c_2 … c_r c_{r+1} is equal to d + 1 if and only if b_{r+1} = c_{r+1} = 2; in all other cases it is equal to d. In other words, we can compute the inner product recursively, by induction on the notation of the two numbers. If we have two numbers k, l represented by the strings k_1 k_2 … k_r and l_1 l_2 … l_s in dyadic notation, we can compute their inner product as the inner product of k_1 … k_{r−1} and l_1 … l_{s−1}, increased by 1 if k_r = l_s = 2 and by 0 otherwise. We continue in this way until we end up with the inner product of 0 and some number, which is, as we noticed, 0. We can summarize this procedure in the following definition.

Definition 2.5. Let k, l be numbers of our theory of the form k = 2x + c, l = 2y + d, where c, d ∈ {1, 2}.
Then we define their inner product (denoted k ⊙ l or IP(k, l)) as follows:

0 ⊙ 0 = 0
0 ⊙ y = 0
y ⊙ 0 = 0
(2x + 1) ⊙ (2y + 1) = x ⊙ y
(2x + 1) ⊙ (2y + 2) = x ⊙ y
(2x + 2) ⊙ (2y + 1) = x ⊙ y
(2x + 2) ⊙ (2y + 2) = x ⊙ y + 1

Example: let us show this definition on an example. For instance, take k = 14, l = 12. Then

k ⊙ l = (2·6 + 2) ⊙ (2·5 + 2) = 6 ⊙ 5 + 1
      = (2·2 + 2) ⊙ (2·2 + 1) + 1 = 2 ⊙ 2 + 1
      = (2·0 + 2) ⊙ (2·0 + 2) + 1 = (0 ⊙ 0 + 1) + 1
      = 0 + 2 = 2.

The definition also covers numbers whose dyadic representations have different lengths; their inner product is then the inner product of their common part. In particular, if we have k = k_1 k_2 … k_r, l = l_1 l_2 … l_s with r < s, then IP(k, l) = IP(k_1 k_2 … k_r, l_{s−r+1} … l_s), because we count recursively starting from the last digit. For example 10 ⊙ 29 = (2·4 + 2) ⊙ (2·14 + 1) = 4 ⊙ 14 = (2·1 + 2) ⊙ (2·6 + 2) = 1 ⊙ 6 + 1 = (2·0 + 1) ⊙ (2·2 + 2) + 1 = 0 ⊙ 2 + 1 = 0 + 1 = 1.

Note: let us record some properties of the inner product: IP(k, l) ≥ 0; moreover IP(k, l) ≥ 1 whenever k and l are both even (their last dyadic digits are then both 2); and IP(k, k) is the number of digits 2 in the dyadic representation of k.

Now we are able to compute the inner product of arbitrary two numbers. This value is a number between 0 and the length of the shorter of the two representations. But we would like to simulate the calculation of the inner product modulo 2, to use it later for defining a hard-core predicate. So we need to restrict the range of the inner product to the values 0 and 1. To achieve this, we will construct a function which produces from a number m the value m mod 2; we begin with an auxiliary function.

Definition 2.6. Using the LRN axiom we define the function symbol drop as follows:

drop(x, 0) = x
drop(x, 2y + 1) = ⌊(drop(x, y) − 1)/2⌋
drop(x, 2y + 2) = ⌊(drop(x, y) − 1)/2⌋,

with the bound drop(x, y) ≤ x required by LRN.

What does drop exactly do? Denote by r the number of digits in the dyadic notation of y. The function drop(x, y) removes the last r digits from the dyadic representation of x and returns the remaining part. If y has more dyadic digits than x, then drop(x, y) returns ε.

Example: let us show on an example how this function works. Let k = 21, l = 3. Then

drop(21, 1) = drop(21, 2·0 + 1) = ⌊(drop(21, 0) − 1)/2⌋ = ⌊(21 − 1)/2⌋ = 10,
drop(k, l) = drop(21, 3) = drop(21, 2·1 + 1) = ⌊(drop(21, 1) − 1)/2⌋ = ⌊(10 − 1)/2⌋ = 4.
In dyadic, k = 1221. The length of l is 2, so drop(21, 3) should return the string 12, i.e. the number 4 — which fits.

Note: we can see that drop(x, y) does not depend on the value of y; the only thing drop(x, y) depends on is the number of digits in the dyadic representation of y, not its concrete value. It means that, for instance, drop(21, 3) = drop(21, 4) = drop(21, 5) = drop(21, 6), since 3, 4, 5, 6 all have two dyadic digits.

Now we would like to define a function which returns the part of the dyadic representation that drop removes. This operation can be defined using the drop function. First we define the inverse function to drop.

Definition 2.7. The function Iv is defined as follows: Iv(x, y) = x − drop(x, y) · 2^{|y|}, where |y| denotes the number of dyadic digits of y.

Example: the function Iv is inverse to the function drop in the following sense. Take x = 20, y = 11. Then drop(x, y) = 1: if we remove the last three digits from 1212 (the dyadic representation of 20), the remainder is 1. We also have Iv(x, y) = Iv(20, 11) = 20 − drop(20, 11) · 2^3 = 20 − 8 = 12, in dyadic 212, which is exactly the part of x that we removed using drop.

Using these two functions we are now able to find any particular part of the dyadic representation of an arbitrary number. If we have a string x of length k, then we denote

x_{1…i} = x if i ≥ k, and drop(x, 1^{(k−i)}) otherwise,
x_{i…k} = 0 if i > k, and Iv(x, 1^{(k−i+1)}) otherwise,

where 1^{(j)} denotes the number whose dyadic representation consists of j ones. Our goal is to extract the last digit of the dyadic representation, which can now be easily done using the previous notation:

x_k = Iv(x, 1^{(k−(k−1))}) = Iv(x, 1^{(1)}) = Iv(x, 1).

Now we are able to determine whether the last digit of some string is 1 or 2. We need this to compute the inner product mod 2. The problem is that the above calculation of the inner product gives us just some string; if we computed the last digit of its dyadic representation, we would get 1 or 2, but this is not the mod 2 value we want. To get the mod 2 value, we need to determine the last bit of the binary representation, not of the dyadic representation. So our task now is to find a way to get the binary representation of some string.
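The recursions for drop and Iv translate directly into code. This is an illustration outside the theory, with ordinary Python integers in place of theory objects; `dlen` is a hypothetical helper computing the dyadic length that Iv uses implicitly.

```python
def dlen(y):
    """Number of digits in the dyadic representation of y."""
    n = 0
    while y > 0:
        y = (y - 2) // 2 if y % 2 == 0 else (y - 1) // 2
        n += 1
    return n

def drop(x, y):
    """Remove the last dlen(y) dyadic digits of x (Definition 2.6)."""
    if y == 0:
        return x
    c = 2 if y % 2 == 0 else 1           # last dyadic digit of y
    d = drop(x, (y - c) // 2)            # recurse with one digit fewer
    return (d - 1) // 2 if d > 0 else 0  # strip one dyadic digit

def Iv(x, y):
    """The part of x that drop(x, y) removed (Definition 2.7)."""
    return x - drop(x, y) * 2 ** dlen(y)

assert drop(21, 3) == 4    # dyadic 1221 -> 12
assert Iv(20, 11) == 12    # dyadic 1212 -> 212
# drop depends only on the dyadic length of its second argument:
assert drop(21, 3) == drop(21, 4) == drop(21, 5) == drop(21, 6)
```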
For example, for x = 3, whose dyadic representation is 11, we would like to obtain the binary representation of x, i.e. 11, stored as a string of our theory: encoding the binary digit 0 as the dyadic digit 1 and the binary digit 1 as the dyadic digit 2, the binary string 11 becomes 22, which corresponds to the number 6. The following table specifies the situation.

number   dyadic string   binary string   encoded as number
0        ε               ε               0
1        1               1               2
2        2               10              5
3        11              11              6
8        112             1000            23

Definition 2.8. We define the function Ib as follows: Ib(x) = x + 2^{|x|} − 1, where |x| is the length of the binary representation of x.

Such a function does exactly what we would like to achieve. For example, for x = 8 we have Ib(x) = 8 + (2^4 − 1) = 23; 8 corresponds to the dyadic string 112, and 23 corresponds to the dyadic string 2111, which decodes (2 ↦ 1, 1 ↦ 0) to 1000, i.e. 8 in binary.

Now we can summarize the whole calculation of the inner product mod 2.

Definition 2.9. We define the function symbol IP₂ (also denoted ⊙₂) as follows:

IP₂(x, y) = 0 if Iv(Ib(IP(x, y)), 1) ≤ 1,
IP₂(x, y) = 1 if Iv(Ib(IP(x, y)), 1) = 2.

2.4 The theorem

To formalize the Goldreich-Levin theorem we need one more function.

Definition 2.10. The function ∘ is defined using LRN as follows:

x ∘ 0 = x
x ∘ (2y + 1) = 2(x ∘ y) + 1
x ∘ (2y + 2) = 2(x ∘ y) + 2.

Note that x ∘ y ≥ x + y. This function returns the concatenation of the strings x and y. For example, let x = 9, y = 13. Then

x ∘ y = 9 ∘ (2·6 + 1) = 2(9 ∘ 6) + 1 = 2(9 ∘ (2·2 + 2)) + 1 = 2(2(9 ∘ 2) + 2) + 1
      = 2(2(9 ∘ (2·0 + 2)) + 2) + 1 = 2(2(2(9 ∘ 0) + 2) + 2) + 1
      = 2(2·20 + 2) + 1 = 85.

We have x = 121 and y = 221 in dyadic. The concatenation of x and y is 121221, which corresponds to 85.
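Definitions 2.5-2.10 combine into a complete mod-2 inner product. The sketch below mirrors the recursions with ordinary Python integers (an illustration outside the theory) and checks the worked examples; the parity step follows the route of Definition 2.9 through Ib and the last dyadic digit.

```python
def IP(k, l):
    """Dyadic inner product (Def. 2.5): counts aligned digit pairs (2, 2)."""
    if k == 0 or l == 0:
        return 0
    ck = 2 if k % 2 == 0 else 1   # last dyadic digit of k
    cl = 2 if l % 2 == 0 else 1   # last dyadic digit of l
    return IP((k - ck) // 2, (l - cl) // 2) + (1 if ck == cl == 2 else 0)

def Ib(x):
    """Encode the binary representation of x with digits 1, 2 (Def. 2.8)."""
    return x + 2 ** x.bit_length() - 1

def last_dyadic_digit(x):
    """Iv(x, 1): the last digit of the dyadic representation of x."""
    return 0 if x == 0 else (2 if x % 2 == 0 else 1)

def IP2(x, y):
    """Inner product mod 2 (Def. 2.9): digit 2 in Ib(...) marks an odd count."""
    return 1 if last_dyadic_digit(Ib(IP(x, y))) == 2 else 0

def concat(x, y):
    """Dyadic concatenation x o y (Definition 2.10)."""
    if y == 0:
        return x
    c = 2 if y % 2 == 0 else 1
    return 2 * concat(x, (y - c) // 2) + c

assert IP(14, 12) == 2       # 222 and 212 share two digits 2
assert IP(10, 29) == 1       # different lengths: common part only
assert IP2(14, 12) == 0 and IP2(10, 29) == 1
assert Ib(8) == 23           # binary 1000 encodes as dyadic 2111 = 23
assert concat(9, 13) == 85   # 121 o 221 = 121221 = 85
```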
Now we can present a formalization of the Goldreich-Levin theorem in our logic system.

Theorem 2.3 (The Goldreich-Levin theorem). Let f be a one-way function. Define the function g as

g(x, r) = f(x) ∘ r for (x, r) ∈ {0,1}^n × {0,1}^n,

and define the function b as

b(x, r) = IP₂(x, r).

Then b(x, r) is a hard-core predicate for the one-way function g.

We now have the formalization of the statement of the Goldreich-Levin theorem in our theory. Our next task is to show that this statement can be proved in the theory as well. This will be the subject of the next chapter.
3. Proof of the Goldreich-Levin theorem

In this chapter we will try to prove the Goldreich-Levin theorem in our formal system of logic. The proof will be based on the classic proof of the Goldreich-Levin theorem as presented in [G01b], but adapted to our system.

3.1 Idea of the original proof

Let us recall the original proof first. We assume that there exist u ∈ {0,1}^n and a function h : {0,1}^n → {0,1} such that

Pr_r[h(r) = ⟨u, r⟩] ≥ 1/2 + ε,

where r is taken uniformly at random. We then want to construct a probabilistic algorithm, running in time polynomial in (ε^{−1}, n) (called the Goldreich-Levin algorithm), which uses the function h as an oracle and whose task is to return u with probability at least Ω(ε²/n). It is then easy to show that if such an algorithm exists, the Goldreich-Levin theorem holds.

The main idea of constructing the algorithm can be described as follows. Suppose first the simplest situation, when Pr_r[h(r) = ⟨u, r⟩] = 1. In this case we can just ask for the values h(e_i), i = 1, …, n, where e_i is the vector which has 1 in the i-th place and 0 elsewhere; then h(e_i) = u_i.

Next suppose that Pr_r[h(r) = ⟨u, r⟩] > 3/4 + ε. In this case we choose r and ask for h(r) ⊕ h(r ⊕ e_i). If h(r) = ⟨u, r⟩ and h(r ⊕ e_i) = ⟨u, r ⊕ e_i⟩, then h(r) ⊕ h(r ⊕ e_i) = ⟨u, e_i⟩ = u_i. By a union bound we also have

Pr_r[h(r) = ⟨u, r⟩ ∧ h(r ⊕ e_i) = ⟨u, r ⊕ e_i⟩] ≥ 1 − 2(1/4 − ε) = 1/2 + 2ε.

We can now repeat this procedure sufficiently many times for random r, taking the majority of the resulting values, to obtain u_i with high enough probability.

This procedure is convenient for this case, but not for the original assumption Pr_r[h(r) = ⟨u, r⟩] ≥ 1/2 + ε. If we compute the probability as in the previous case, we only get

Pr_r[h(r) = ⟨u, r⟩ ∧ h(r ⊕ e_i) = ⟨u, r ⊕ e_i⟩] ≥ 1 − 2(1/2 − ε) = 2ε.

We cannot use the same amplification argument as before, because amplifying this process will not give us a better probability: we double the probability of error during each turn by making two oracle queries.
A convenient solution is to ask for a value of h only once during each turn. We ask only for h(r ⊕ e_i) and guess the values ⟨u, r⟩; denote these guesses by b_i. We do this for a certain number of strings and take the majority value of the results. In the end we obtain a success probability of at least 1 − 1/(4n) for each bit. Combining this with the probability that all our guesses are right, we get a sufficiently good probability that the algorithm outputs u. We will give more details later in this chapter. Our task now is to transform this proof into our system.
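The majority-vote step for the easy case Pr_r[h(r) = ⟨u, r⟩] > 3/4 + ε can be sketched directly. The code below (helper names `inner` and `recover` are illustrative, not from the text) recovers u bit by bit from votes h(r) ⊕ h(r ⊕ e_i); the oracle in the assertion is perfectly correct, and the noisy variant errs on a small deterministic subset of inputs while staying above the 3/4 threshold.

```python
import random

def inner(u, r):
    """<u, r> mod 2 over 0/1 vectors."""
    return sum(ui & ri for ui, ri in zip(u, r)) & 1

def recover(h, n, trials=201, seed=0):
    """Recover each bit u_i as the majority of h(r) XOR h(r XOR e_i)."""
    rng = random.Random(seed)
    u = []
    for i in range(n):
        votes = 0
        for _ in range(trials):
            r = [rng.randint(0, 1) for _ in range(n)]
            r_flipped = r[:]
            r_flipped[i] ^= 1          # r XOR e_i
            votes += h(r) ^ h(r_flipped)
        u.append(1 if 2 * votes > trials else 0)
    return u

u = [1, 0, 1, 1, 0, 0, 1, 0]

# With a perfect oracle every vote equals u_i, so recovery is exact.
perfect = lambda r: inner(u, r)
assert recover(perfect, len(u)) == u

# An oracle wrong on a fixed sparse set of inputs still lets the
# majority vote succeed with high probability (not asserted here).
noisy = lambda r: inner(u, r) ^ (1 if sum(r) % 8 == 0 else 0)
print(recover(noisy, len(u)))
```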
More informationVC-DENSITY FOR TREES
VC-DENSITY FOR TREES ANTON BOBKOV Abstract. We show that for the theory of infinite trees we have vc(n) = n for all n. VC density was introduced in [1] by Aschenbrenner, Dolich, Haskell, MacPherson, and
More information2 Evidence that Graph Isomorphism is not NP-complete
Topics in Theoretical Computer Science April 11, 2016 Lecturer: Ola Svensson Lecture 7 (Notes) Scribes: Ola Svensson Disclaimer: These notes were written for the lecturer only and may contain inconsistent
More informationShort Introduction to Admissible Recursion Theory
Short Introduction to Admissible Recursion Theory Rachael Alvir November 2016 1 Axioms of KP and Admissible Sets An admissible set is a transitive set A satisfying the axioms of Kripke-Platek Set Theory
More information: On the P vs. BPP problem. 30/12/2016 Lecture 12
03684155: On the P vs. BPP problem. 30/12/2016 Lecture 12 Time Hierarchy Theorems Amnon Ta-Shma and Dean Doron 1 Diagonalization arguments Throughout this lecture, for a TM M, we denote M t to be the machine
More informationLecture 23: Alternation vs. Counting
CS 710: Complexity Theory 4/13/010 Lecture 3: Alternation vs. Counting Instructor: Dieter van Melkebeek Scribe: Jeff Kinne & Mushfeq Khan We introduced counting complexity classes in the previous lecture
More information1 Computational Problems
Stanford University CS254: Computational Complexity Handout 2 Luca Trevisan March 31, 2010 Last revised 4/29/2010 In this lecture we define NP, we state the P versus NP problem, we prove that its formulation
More informationThe Indistinguishability of the XOR of k permutations
The Indistinguishability of the XOR of k permutations Benoit Cogliati, Rodolphe Lampe, Jacques Patarin University of Versailles, France Abstract. Given k independent pseudorandom permutations f 1,...,
More informationAntonina Kolokolova Memorial University of Newfoundland
6 0 1 5 2 4 3 Antonina Kolokolova Memorial University of Newfoundland Understanding limits of proof techniques: Diagonalization Algebrization (limits of arithmetization technique) Natural proofs Power
More informationALGEBRA. 1. Some elementary number theory 1.1. Primes and divisibility. We denote the collection of integers
ALGEBRA CHRISTIAN REMLING 1. Some elementary number theory 1.1. Primes and divisibility. We denote the collection of integers by Z = {..., 2, 1, 0, 1,...}. Given a, b Z, we write a b if b = ac for some
More informationMODULAR ARITHMETIC KEITH CONRAD
MODULAR ARITHMETIC KEITH CONRAD. Introduction We will define the notion of congruent integers (with respect to a modulus) and develop some basic ideas of modular arithmetic. Applications of modular arithmetic
More informationCONSTRUCTION OF THE REAL NUMBERS.
CONSTRUCTION OF THE REAL NUMBERS. IAN KIMING 1. Motivation. It will not come as a big surprise to anyone when I say that we need the real numbers in mathematics. More to the point, we need to be able to
More informationAuthentication. Chapter Message Authentication
Chapter 5 Authentication 5.1 Message Authentication Suppose Bob receives a message addressed from Alice. How does Bob ensure that the message received is the same as the message sent by Alice? For example,
More informationNP-Completeness I. Lecture Overview Introduction: Reduction and Expressiveness
Lecture 19 NP-Completeness I 19.1 Overview In the past few lectures we have looked at increasingly more expressive problems that we were able to solve using efficient algorithms. In this lecture we introduce
More informationCSC 5170: Theory of Computational Complexity Lecture 5 The Chinese University of Hong Kong 8 February 2010
CSC 5170: Theory of Computational Complexity Lecture 5 The Chinese University of Hong Kong 8 February 2010 So far our notion of realistic computation has been completely deterministic: The Turing Machine
More information2. Introduction to commutative rings (continued)
2. Introduction to commutative rings (continued) 2.1. New examples of commutative rings. Recall that in the first lecture we defined the notions of commutative rings and field and gave some examples of
More informationA Guide to Proof-Writing
A Guide to Proof-Writing 437 A Guide to Proof-Writing by Ron Morash, University of Michigan Dearborn Toward the end of Section 1.5, the text states that there is no algorithm for proving theorems.... Such
More informationIndistinguishability and Pseudo-Randomness
Chapter 3 Indistinguishability and Pseudo-Randomness Recall that one main drawback of the One-time pad encryption scheme and its simple encryption operation Enc k (m) = m k is that the key k needs to be
More informationIntroduction to Logic in Computer Science: Autumn 2006
Introduction to Logic in Computer Science: Autumn 2006 Ulle Endriss Institute for Logic, Language and Computation University of Amsterdam Ulle Endriss 1 Plan for Today Today s class will be an introduction
More informationNotes for Math 290 using Introduction to Mathematical Proofs by Charles E. Roberts, Jr.
Notes for Math 290 using Introduction to Mathematical Proofs by Charles E. Roberts, Jr. Chapter : Logic Topics:. Statements, Negation, and Compound Statements.2 Truth Tables and Logical Equivalences.3
More informationAutomata Theory and Formal Grammars: Lecture 1
Automata Theory and Formal Grammars: Lecture 1 Sets, Languages, Logic Automata Theory and Formal Grammars: Lecture 1 p.1/72 Sets, Languages, Logic Today Course Overview Administrivia Sets Theory (Review?)
More informationLecture 4 Chiu Yuen Koo Nikolai Yakovenko. 1 Summary. 2 Hybrid Encryption. CMSC 858K Advanced Topics in Cryptography February 5, 2004
CMSC 858K Advanced Topics in Cryptography February 5, 2004 Lecturer: Jonathan Katz Lecture 4 Scribe(s): Chiu Yuen Koo Nikolai Yakovenko Jeffrey Blank 1 Summary The focus of this lecture is efficient public-key
More informationOpleiding Informatica
Opleiding Informatica Tape-quantifying Turing machines in the arithmetical hierarchy Simon Heijungs Supervisors: H.J. Hoogeboom & R. van Vliet BACHELOR THESIS Leiden Institute of Advanced Computer Science
More informationLecture 6: Introducing Complexity
COMP26120: Algorithms and Imperative Programming Lecture 6: Introducing Complexity Ian Pratt-Hartmann Room KB2.38: email: ipratt@cs.man.ac.uk 2015 16 You need this book: Make sure you use the up-to-date
More informationPropositional Logic, Predicates, and Equivalence
Chapter 1 Propositional Logic, Predicates, and Equivalence A statement or a proposition is a sentence that is true (T) or false (F) but not both. The symbol denotes not, denotes and, and denotes or. If
More informationFrom Fixed-Length to Arbitrary-Length RSA Encoding Schemes Revisited
From Fixed-Length to Arbitrary-Length RSA Encoding Schemes Revisited Julien Cathalo 1, Jean-Sébastien Coron 2, and David Naccache 2,3 1 UCL Crypto Group Place du Levant 3, Louvain-la-Neuve, B-1348, Belgium
More informationLINDSTRÖM S THEOREM SALMAN SIDDIQI
LINDSTRÖM S THEOREM SALMAN SIDDIQI Abstract. This paper attempts to serve as an introduction to abstract model theory. We introduce the notion of abstract logics, explore first-order logic as an instance
More informationNumber Systems III MA1S1. Tristan McLoughlin. December 4, 2013
Number Systems III MA1S1 Tristan McLoughlin December 4, 2013 http://en.wikipedia.org/wiki/binary numeral system http://accu.org/index.php/articles/1558 http://www.binaryconvert.com http://en.wikipedia.org/wiki/ascii
More informationLaver Tables A Direct Approach
Laver Tables A Direct Approach Aurel Tell Adler June 6, 016 Contents 1 Introduction 3 Introduction to Laver Tables 4.1 Basic Definitions............................... 4. Simple Facts.................................
More informationA NOTE ON A YAO S THEOREM ABOUT PSEUDORANDOM GENERATORS
A NOTE ON A YAO S THEOREM ABOUT PSEUDORANDOM GENERATORS STÉPHANE BALLET AND ROBERT ROLLAND Abstract. The Yao s theorem gives an equivalence between the indistinguishability of a pseudorandom generator
More informationPOL502: Foundations. Kosuke Imai Department of Politics, Princeton University. October 10, 2005
POL502: Foundations Kosuke Imai Department of Politics, Princeton University October 10, 2005 Our first task is to develop the foundations that are necessary for the materials covered in this course. 1
More informationDRAFT. Algebraic computation models. Chapter 14
Chapter 14 Algebraic computation models Somewhat rough We think of numerical algorithms root-finding, gaussian elimination etc. as operating over R or C, even though the underlying representation of the
More informationLecture 4: Constructing the Integers, Rationals and Reals
Math/CS 20: Intro. to Math Professor: Padraic Bartlett Lecture 4: Constructing the Integers, Rationals and Reals Week 5 UCSB 204 The Integers Normally, using the natural numbers, you can easily define
More informationJASS 06 Report Summary. Circuit Complexity. Konstantin S. Ushakov. May 14, 2006
JASS 06 Report Summary Circuit Complexity Konstantin S. Ushakov May 14, 2006 Abstract Computer science deals with many computational models. In real life we have normal computers that are constructed using,
More informationLecture 14: Cryptographic Hash Functions
CSE 599b: Cryptography (Winter 2006) Lecture 14: Cryptographic Hash Functions 17 February 2006 Lecturer: Paul Beame Scribe: Paul Beame 1 Hash Function Properties A hash function family H = {H K } K K is
More informationMATH 115, SUMMER 2012 LECTURE 12
MATH 115, SUMMER 2012 LECTURE 12 JAMES MCIVOR - last time - we used hensel s lemma to go from roots of polynomial equations mod p to roots mod p 2, mod p 3, etc. - from there we can use CRT to construct
More informationLecture 7: More Arithmetic and Fun With Primes
IAS/PCMI Summer Session 2000 Clay Mathematics Undergraduate Program Advanced Course on Computational Complexity Lecture 7: More Arithmetic and Fun With Primes David Mix Barrington and Alexis Maciel July
More information