On Finding Planted Cliques and Solving Random Linear Equations. Math Colloquium. Lev Reyzin, UIC


1 On Finding Planted Cliques and Solving Random Linear Equations. Math Colloquium. Lev Reyzin, UIC

2 random graphs

3 Erdős-Rényi Random Graphs. G(n,p) generates a graph G on n vertices by including each possible edge independently with probability p. [Figure: a sample from G(4, 0.5).]


6 Typical Examples: n = 100, p = 0.01; n = 100, p = 0.1; n = 100, p = 0.5. Created using software by Christopher Manning, available at http://bl.ocks.org/christophermanning/

7 Erdős-Rényi Random Graphs. E-R random graphs are an interesting object of study in combinatorics. When does G have a giant component? When is G connected? How large is the largest clique in G?

8 Erdős-Rényi Random Graphs. E-R random graphs are an interesting object of study in combinatorics. When does G have a giant component? When np = c for a constant c > 1. When is G connected? There is a sharp connectivity threshold at p = ln(n)/n. How large is the largest clique in G? For p = ½, the largest clique has size k(n) ≈ 2lg₂(n).

9 W.h.p. for G ~ G(n,½), k(n) ≈ 2lg₂(n). Why? Let X_k be the number of k-cliques in G ~ G(n,½). Then E[X_k] = (n choose k) · 2^(-(k choose 2)) < 1 for k > 2lg₂(n). In fact, (for large n) the largest clique almost certainly has size k(n) = 2lg₂(n) or 2lg₂(n)+1. [Matula 76]
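To see the first-moment calculation concretely, here is a minimal Python check of E[X_k] (a sketch, not from the slides; since it uses the exact binomial coefficient, the expectation dips below 1 slightly before 2lg₂(n), while the slide's bound is the clean sufficient condition):

    from math import comb, log2

    def expected_k_cliques(n, k):
        # E[X_k] = C(n, k) * 2^(-C(k, 2)) for G ~ G(n, 1/2)
        return comb(n, k) * 2.0 ** (-comb(k, 2))

    n = 1000
    print("2*lg2(n) =", 2 * log2(n))          # ~19.93 for n = 1000
    for k in (10, 15, 20, 25):
        print(k, expected_k_cliques(n, k))
    # For every k above 2*lg2(n) the expectation is far below 1, so by Markov's
    # inequality cliques of that size almost surely do not exist.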

12 Where is the largest clique? [Slides 10, 11, and 13 are figure-only: drawings of a sample random graph.]

14 Finding Large Cliques. For worst-case graphs: finding the largest clique is NP-Hard (very very unlikely to have efficient algorithms); W[1]-Hard (very likely gets harder as target cliques grow); hard to approximate to within n^(1-ε) for any ε > 0 (give up). Hope: in E-R random graphs, finding large cliques is easier.


18 Finding Large Cliques in G ~ G(n,½). Finding a clique of size lg₂(n) is easy:

    initialize T = Ø, S = V
    while (S ≠ Ø) {
        pick a random v ∈ S and add v to T
        remove v and its non-neighbors from S
    }
    return T

Conjecture [Karp 76]: for any ε > 0, there's no efficient method to find cliques of size (1+ε)lg₂(n) in E-R random graphs. Still open (would imply P ≠ NP).
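The greedy procedure runs as-is; here is a minimal self-contained Python version (a sketch; the adjacency-set representation is an illustrative choice):

    import random

    def greedy_clique(adj):
        # adj: dict mapping each vertex to the set of its neighbors
        T, S = set(), set(adj)
        while S:
            v = random.choice(sorted(S))      # pick a random surviving vertex
            T.add(v)
            S &= adj[v]                       # keep only neighbors of v
        return T                              # T is a clique by construction

    def gnp(n, p=0.5):
        adj = {v: set() for v in range(n)}
        for u in range(n):
            for v in range(u + 1, n):
                if random.random() < p:
                    adj[u].add(v); adj[v].add(u)
        return adj

    G = gnp(1024)
    print(len(greedy_clique(G)))              # typically close to lg2(1024) = 10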

21 Summary. In E-R random graphs: a clique of size 2lg₂(n) exists; we can efficiently find a clique of size lg₂(n); we likely cannot efficiently find cliques of size (1+ε)lg₂(n). What to do?

22 Summary. What to do? Make the problem easier by planting a large clique to be found. [Jerrum 92]

23 planted cliques

24 Planted Clique. The process G ~ G(n,p,k): 1. generate G ~ G(n,p); 2. add a clique on a random subset of k < n vertices of G. Goal: given G ~ G(n,p,k), find the k vertices where the clique was planted (the algorithm knows the values n, p, k).
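A minimal sketch of sampling from G(n,p,k) as just defined (the dictionary-of-neighbor-sets representation is an illustrative choice, not from the slides):

    import random

    def planted_clique_graph(n, p, k):
        # 1. generate G ~ G(n, p)
        adj = {v: set() for v in range(n)}
        for u in range(n):
            for v in range(u + 1, n):
                if random.random() < p:
                    adj[u].add(v); adj[v].add(u)
        # 2. plant a clique on a random subset of k vertices
        plant = random.sample(range(n), k)
        for u in plant:
            for v in plant:
                if u != v:
                    adj[u].add(v)
        return adj, set(plant)   # the algorithmic goal: recover `plant` from `adj` alone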

25 Progress on Planted Clique. For G ~ G(n,½,k), clearly no hope for k ≤ 2lg₂(n)+1. For k > 2lg₂(n)+1, there is an obvious n^(O(lg n))-time algorithm:

    input: G from G(n,1/2,k) with k > 2lg₂(n)+1
    1) Check all S ⊆ V of size |S| = 2lg₂(n)+2 for an S that induces a clique in G.
    2) For each v ∈ V, if (v,w) is an edge for all w in S: S = S ∪ {v}
    3) return S

Unfortunately, this is not polynomial time.
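A direct transcription of this quasi-polynomial procedure, as a sketch (only feasible for very small n, which is exactly the slide's point):

    from itertools import combinations
    from math import ceil, log2

    def quasipoly_planted_clique(adj):
        # The slide's n^{O(lg n)} procedure: try every seed of size 2*lg2(n)+2.
        n = len(adj)
        s = 2 * ceil(log2(n)) + 2
        for seed in combinations(range(n), s):                        # n^{O(lg n)} candidate seeds
            if all(v in adj[u] for u, v in combinations(seed, 2)):    # does the seed induce a clique?
                S = set(seed)
                # step 2: add every vertex adjacent to all of the seed
                return S | {v for v in range(n) if S <= adj[v]}
        return None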

26 Progress on Planted Clique. What is the smallest value of k for which we have a polynomial-time algorithm? Any guesses?

27 State-of-the-Art for Polynomial Time. k ≥ c(n lg n)^(1/2) is trivial: the degrees of the vertices in the plant stand out (proof via Hoeffding + a union bound). k = c·n^(1/2) is the best so far. [Alon-Krivelevich-Sudakov 98]

    input: G from G(n,1/2,k) with k ≥ 10 n^(1/2)
    1) Find the 2nd eigenvector v₂ of A(G).
    2) Sort V by decreasing absolute value of the coordinates of v₂. Let W be the top k vertices in this order.
    3) Return Q, the set of vertices with at least ¾k neighbors in W.
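Both regimes translate into a few lines of numpy; the sketch below assumes the graph is given as a dense 0/1 adjacency matrix, and the constants are only illustrative:

    import numpy as np

    def top_degrees(A, k):
        # trivial regime, k >= c*(n lg n)^(1/2): plant vertices have visibly larger degree
        return set(np.argsort(A.sum(axis=1))[-k:].tolist())

    def aks_spectral(A, k):
        # k = c*n^(1/2) regime [Alon-Krivelevich-Sudakov 98], following the slide's three steps
        eigvals, eigvecs = np.linalg.eigh(A)           # eigenvalues in ascending order
        v2 = eigvecs[:, -2]                            # eigenvector of the 2nd largest eigenvalue
        W = np.argsort(-np.abs(v2))[:k]                # top k coordinates of |v2|
        in_W = np.zeros(A.shape[0], dtype=bool)
        in_W[W] = True
        return {v for v in range(A.shape[0]) if A[v, in_W].sum() >= 0.75 * k}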


31 State-of-the-Art for Polynomial Time. In fact, (bipartite) planted clique was recently used as an alternate cryptographic primitive for k < n^(1/2-ε). [Applebaum-Barak-Wigderson 09]

32 State-of-the-Art for Polynomial Time. My goal: explain why there has been no progress on this problem past n^(1/2). [FGRVX 13]

33 But first, we have to discuss solving linear systems.

34 linear systems

35 Solving Linear Systems. Ax = b: A is an m × n matrix (m equations, n variables), b is the vector of m results; solve for the n unknowns. The linear equations are over GF(2), i.e., entries in {0,1}.

36 Solving Random Linear Systems. [Figure: the same system Ax = b, now with the entries of A, and the hidden x = (x₁, x₂, ..., x_n), chosen uniformly at random over GF(2); b is the observed vector of m results.] Choose any m = poly(n): one can solve for the unique x in poly time. How?
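The intended answer is Gaussian elimination, which works over GF(2) exactly as over the rationals; a compact sketch (assuming the random system has a unique solution, which holds with overwhelming probability once m is somewhat larger than n):

    import random

    def solve_gf2(A, b):
        # Gaussian elimination over GF(2); A is a list of m rows, each a list of n bits.
        m, n = len(A), len(A[0])
        rows = [(A[i][:], b[i]) for i in range(m)]
        pivots, r = [], 0
        for col in range(n):
            pivot = next((i for i in range(r, m) if rows[i][0][col] == 1), None)
            if pivot is None:
                return None                      # this column is not pinned down: no unique x
            rows[r], rows[pivot] = rows[pivot], rows[r]
            for i in range(m):
                if i != r and rows[i][0][col] == 1:
                    rows[i] = ([a ^ c for a, c in zip(rows[i][0], rows[r][0])],
                               rows[i][1] ^ rows[r][1])
            pivots.append(col)
            r += 1
        x = [0] * n
        for i, col in enumerate(pivots):
            x[col] = rows[i][1]
        return x

    n, m = 50, 200
    x_true = [random.randint(0, 1) for _ in range(n)]
    A = [[random.randint(0, 1) for _ in range(n)] for _ in range(m)]
    b = [sum(a * xi for a, xi in zip(row, x_true)) % 2 for row in A]
    assert solve_gf2(A, b) == x_true   # full column rank with overwhelming probability here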

42 A Twist. Same setup: Ax = b with random A over GF(2), but now each entry of b is flipped independently with probability 1/100. Choose any m = poly(n): solve for x (the one that generated the original b) in poly time. How?
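For concreteness, a sketch of how such a noisy instance is generated (the 1/100 noise rate is the one from the slide; eta is just a parameter name chosen here):

    import random

    def lpn_instance(n, m, eta=0.01):
        x = [random.randint(0, 1) for _ in range(n)]
        A = [[random.randint(0, 1) for _ in range(n)] for _ in range(m)]
        b = [(sum(a * xi for a, xi in zip(row, x)) + (random.random() < eta)) % 2
             for row in A]                     # each label flipped independently with prob. eta
        return A, b, x                         # the solver sees only (A, b); x is the secret

Plain Gaussian elimination now fails, since even one flipped equation among the pivot rows corrupts the recovered x.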

43 A Twist. It's a big open question in theoretical CS, called LPN (Learning Parity with Noise).

44 A Twist. The current best algorithm runs in 2^(O(n/lg n)) time. [BlumKW 00]

45 A Twist. In fact, LPN was recently used as an alternate cryptographic primitive. [Peikert 09]

46 learning theory

47 PAC Learning, in One Slide [Valiant 84]. [Animated figure:] A target function f ∈ F is fixed. Examples a₁, a₂, ..., a_m are drawn from a distribution; for each one the learner sees the example together with its label f(aᵢ). From these the learner forms a hypothesis h. On fresh examples a_{m+1}, a_{m+2}, ... drawn from the same distribution, the learner predicts h(a_{m+1}), h(a_{m+2}), ..., which are compared against the true labels f(a_{m+1}), f(a_{m+2}), ...; the goal is for h to agree with f on most of the distribution.

64 Learning Linear Functions (mod 2). [Figure:] The examples are the rows of the random m × n matrix U from before (each row uniform over {0,1}^n), and the label of a row a is ⟨x, a⟩ mod 2 for a hidden target x = (x₁, x₂, ..., x_n). Learning the linear function is exactly solving the random linear system; the learner's hypothesis is some x' = (x'₁, x'₂, ..., x'_n).

70 Learning Linear Functions (mod 2). The target f is given by x = (x₁, x₂, ..., x_n), the hypothesis h by x' = (x'₁, x'₂, ..., x'_n). Remember the coefficients of the equations are generated uniformly at random from {0,1}^n. So, if ∃i s.t. xᵢ ≠ x'ᵢ, then f and h will disagree half of the time. Hence there are 2^n different, pairwise orthogonal functions; the parities form the Fourier basis for Boolean functions under U.
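A quick empirical check of the one-half disagreement claim (a sketch; the exact statement follows from the orthogonality of distinct parities under the uniform distribution):

    import random

    def parity(x, a):                          # f_x(a) = <x, a> mod 2
        return sum(xi * ai for xi, ai in zip(x, a)) % 2

    n, trials = 20, 100000
    x  = [random.randint(0, 1) for _ in range(n)]
    x2 = x[:]; x2[0] ^= 1                      # hypothesis differing in a single coordinate
    disagreements = sum(parity(x, a) != parity(x2, a)
                        for a in ([random.randint(0, 1) for _ in range(n)]
                                  for _ in range(trials)))
    print(disagreements / trials)              # ~0.5: distinct parities disagree on half the inputs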

71 When there's noise. [Figure: the same random linear system Ux = b, now with some of the labels in b flipped.]

72 statistical queries

73 Statistical Query Learning [Kearns 93]. [Figure:] The learner no longer receives labeled examples directly. Instead it submits a query q: {0,1}^n × {0,1} → {0,1}; the oracle draws examples a from U (the uniform distribution over {0,1}^n), evaluates q(a, f(a)) ∈ {0,1}, and reports back (an estimate of) how often the query evaluates to 1. The learner may make poly(n) such queries.
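A minimal simulation of such an oracle, following Kearns' definition in which the learner receives an estimate of E_{a~U}[q(a, f(a))] up to an additive tolerance (the sampling-based tolerance handling here is a simplification):

    import random

    def sq_oracle(q, f, n, tolerance=0.01):
        # Estimate E_{a ~ U}[q(a, f(a))] by sampling; a real STAT oracle may answer
        # with anything within +/- tolerance of the true expectation.
        samples = int(4 / tolerance ** 2)
        total = 0
        for _ in range(samples):
            a = tuple(random.randint(0, 1) for _ in range(n))
            total += q(a, f(a))
        return total / samples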

78 Statistical Queries. Theorem [Kearns 93]: If a family of functions is learnable with statistical queries, then it is learnable (in the original model) with noise. Theorem [Kearns 93]: Linear functions (mod 2) are not learnable with statistical queries. Proof idea: because the linear functions are orthogonal under U, queries are either uninformative or eliminate one wrong linear function at a time (and there are 2^n of them).

79 Statistical Queries. Theorem [Blum et al 94]: when a family of functions has exponentially high SQ dimension, it is not learnable with statistical queries. SQ dimension is roughly the number of nearly-orthogonal functions (w.r.t. a reference distribution). Linear functions have SQ dimension = 2^n.

80 Statistical Queries. Shockingly, almost all learning algorithms can be implemented with statistical queries. So high SQ dimension is a serious barrier to learning, especially under noise.

81 Summary. Linear equations with errors seem hard to solve (noisy linear functions over GF(2) seem hard to learn). Statistical queries and statistical dimension from learning theory are an explanation as to why (almost all our learning algorithms are statistical).

82 Summary. Idea: extend this framework to optimization problems and use it to explain the hardness of planted clique.

83 statistical algorithms [FGRVX 13]

84 Traditional Algorithms. [Figure:] input data → processing ("please wait") → output.

87 Statistical Algorithms. [Figure:] The algorithm does not access the input data directly. Instead it asks queries q: ... → {0,1}; each query is answered by evaluating q on (a random element of) the data, and the algorithm may ask poly(n) such queries before producing its output. It turns out most (all?) current optimization algorithms have statistical analogues.

94 Bipartite Planted Clique. [Figure: the adjacency matrix A(G), one row per vertex on one side.] Each row is generated independently: with probability (n-k)/n it is a uniformly random 0/1 vector, and with probability k/n it is uniformly random except that the k plant coordinates are set to 1.
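Since a statistical algorithm will only ever touch this input one random row at a time, the whole instance can be presented as a row sampler; a minimal sketch (the fixed hidden plant is an illustrative representation):

    import random

    def bpc_row(n, plant):
        # with prob (n-k)/n: a uniformly random row
        # with prob k/n:     uniform outside the plant, all 1s on the plant coordinates
        k = len(plant)
        row = [random.randint(0, 1) for _ in range(n)]
        if random.random() < k / n:
            for i in plant:
                row[i] = 1
        return row

    n, k = 10000, 100
    plant = set(random.sample(range(n), k))            # hidden from the solver
    sample = [bpc_row(n, plant) for _ in range(n)]     # the "graph": n independent rows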

97 Statistical Algorithms for BPC. [Figure:] A statistical algorithm for bipartite planted clique never sees A(G) itself. It submits queries q: {0,1}^n → {0,1}; each query is answered by evaluating q on a random row of A(G), e.g. q(1,0,1,1,0,1,1), and the algorithm may make poly(n) such queries.

103 Results. An extension of the statistical query model to optimization. Proving tighter, more general lower bounds, which apply to learning as well. Gives a new tool for showing problems are difficult.

104 Results. Main result (almost): for any ε > 0, no poly-time statistical algorithm using o(n²/k²) samples can find planted cliques of size n^(1/2-ε). Intuition: there are many planted clique distributions with small overlap (nearly orthogonal in some sense), which are hard to tell apart from normal E-R graphs. This implies that many ideas will fail to work, including Markov chain approaches [Frieze-Kannan 03] for our version of the problem. Since this work, an integrality gap of n^(1/2) was shown for planted clique, giving further evidence for its hardness. [Meka-Wigderson 13]


106 Any Questions?
