Testing by Implicit Learning

Testing by Implicit Learning. Rocco Servedio, Columbia University. ITCS Property Testing Workshop, Beijing, January 2010.

What this talk is about. 1. Testing by Implicit Learning: a method for testing classes of Boolean functions. Combines learning theory ingredients with junta testing [FKRSS04]; works for classes of functions that are well approximated by juntas; gives new query-efficient testing results for many classes. 2. Extension of the basic method: computationally efficient testing for sparse GF(2) polynomials. 3. A different extension of the basic method: query-efficient testing for many classes with low-dimensional Fourier spectrum.

Basic framework for the whole talk. The tester makes black-box queries to an arbitrary oracle for f : {0,1}^n → {0,1}. A property of Boolean functions is the class C of all functions that have the property. Examples: linear functions over GF(2) (parities) [BLR93]; degree-d GF(2) polynomials [AKKSR03]; halfspaces [MORS09]; monotone functions [DGLRRS99, GGLRS00]; s-term DNF formulas [DLMORSW07]; etc.

Basic framework, continued. The tester must output "yes" whp if f is in C, must output "no" whp if f is ε-far from every g in C, and any output is OK otherwise. Here and throughout the talk, distance is measured w.r.t. the uniform distribution over the domain {0,1}^n. Main concern: the information-theoretic number of queries required (but computationally efficient algorithms are nice too).
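
For reference, the standard distance notion the slide refers to, written out (our reconstruction, not verbatim from the talk):

\[
\mathrm{dist}(f,g) \;=\; \Pr_{x \sim \{0,1\}^n}\bigl[f(x) \neq g(x)\bigr],
\qquad
f \text{ is } \varepsilon\text{-far from } \mathcal{C} \;\Longleftrightarrow\; \min_{g \in \mathcal{C}} \mathrm{dist}(f,g) \ge \varepsilon .
\]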

1. Testing by Implicit Learning. Ilias Diakonikolas, Homin Lee, Kevin Matulef, Krzysztof Onak, Ronitt Rubinfeld, Andrew Wan: "Testing for Concise Representations".

Learning a concept class. PAC learning a concept class C under the uniform distribution. Setup: the learner is given a sample of labeled examples (x, f(x)); the target function f in C is unknown to the learner; each example x in the sample is independent and uniform over {0,1}^n. Goal: for every f in C, with probability at least 1 − δ the learner should output a hypothesis h such that Pr_x[h(x) ≠ f(x)] ≤ ε.

Proper Learning via Occam's Razor. A learning algorithm for C is proper if it outputs hypotheses from C. Generic proper learning algorithm for any finite class C: draw m = O((1/ε) log(|C|/δ)) labeled examples; output any h in C that is consistent with all the examples (finding such an h may be computationally hard). Why it works: suppose the true error rate of some h in C is greater than ε. Then Pr[h is consistent with m random examples] ≤ (1 − ε)^m ≤ δ/|C|, so Pr[any bad h is output] is at most δ.
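
To make the generic Occam learner concrete, here is a minimal Python sketch (our illustration, not code from the talk). It assumes the finite class is handed to us explicitly as a list of candidate hypotheses; the names occam_learn and oracle are ours.

import math
import random

def occam_learn(hypotheses, oracle, eps, delta, n):
    """Generic proper learner for a finite class.

    hypotheses: list of candidate functions h(x) -> {0,1} (the class C)
    oracle: the unknown target f(x) -> {0,1}, used only to label examples
    Draws m = ceil((1/eps) * ln(|C|/delta)) uniform examples and returns any
    hypothesis consistent with all of them.
    """
    m = math.ceil((1.0 / eps) * math.log(len(hypotheses) / delta))
    sample = []
    for _ in range(m):
        x = tuple(random.randint(0, 1) for _ in range(n))
        sample.append((x, oracle(x)))
    for h in hypotheses:                      # brute-force search: the runtime bottleneck
        if all(h(x) == y for (x, y) in sample):
            return h
    return None                               # only happens if the target is not in C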

Testing via proper learning. [GGR98]: C properly learnable implies C testable with essentially the same number of queries. Run the learning algorithm to learn f to error ε/4; the hypothesis obtained is h in C. Draw O(1/ε) fresh random examples and use them to check that f and h mostly agree. Why it works: if f is in C, then h is (ε/4)-accurate, so the estimated error of h is small; if f is ε-far from C, then f is ε-far from h (since h is in C), so the estimated error of h is large.
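
A matching sketch of the [GGR98] learn-then-verify reduction, reusing the hypothetical occam_learn (and its imports) from the previous sketch: learn to accuracy ε/4, then estimate the hypothesis's error on fresh uniform examples and accept only if the estimate is small.

def test_via_learning(hypotheses, oracle, eps, n):
    """Returns True if the oracle appears to be in the class, False otherwise."""
    h = occam_learn(hypotheses, oracle, eps / 4, 0.01, n)
    if h is None:
        return False
    verify_samples = math.ceil(32 / eps)       # enough to separate error ~eps/4 from ~eps
    disagreements = 0
    for _ in range(verify_samples):
        x = tuple(random.randint(0, 1) for _ in range(n))
        if h(x) != oracle(x):
            disagreements += 1
    return disagreements / verify_samples <= eps / 2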

It's not that easy. [GGR98]: properly learnable implies testable with the same number of queries, and by Occam's Razor every finite class is properly learnable. Great! But... the Occam's Razor proper learner uses many examples: m = O((1/ε) log |C|). Even for very simple classes of functions over n variables (like literals/dictators), any learning algorithm must use Ω(log n) examples, and in testing we want query complexity independent of n.

Some known property testing results (classes of functions over {0,1}^n, each testable with a number of queries independent of n): parity functions [BLR93]; degree-d polynomials [AKK+03]; literals [PRS02]; conjunctions [PRS02]; k-juntas [FKRSS04, B08, B09]; s-term monotone DNF [PRS02]; halfspaces [MORS09]. Question [PRS02]: what about non-monotone s-term DNF?

Testing via Implicit Learning results. Theorem [DLMORSW07]: the class of s-term DNF over {0,1}^n is testable with poly(s/ε) queries. The same holds for: s-leaf decision trees; size-s branching programs; size-s Boolean formulas (AND/OR/NOT gates); size-s Boolean circuits (AND/OR/NOT gates); s-sparse polynomials over GF(2); s-sparse algebraic circuits over GF(2); s-sparse algebraic computation trees over GF(2). All results follow from the basic testing by implicit learning approach.

When it works: Testing by Implicit Learning. The approach works for any class C that is well approximated by juntas: for every f in C there exists an f' such that f' is close to f and f' depends on few variables (τ-close and depending on only k(τ) variables). How it works, running example: testing whether f is an s-term DNF versus ε-far from every s-term DNF.

s-term DNF are well-approximated by juntas. Let f be any s-term DNF formula. There is an ε-approximating DNF f' with at most s terms, where each term contains at most log(s/ε) variables, and hence f' is an (s·log(s/ε))-junta. Reason: any term with more than log(s/ε) variables is satisfied with probability less than ε/s, so we can delete all (at most s) such terms from f to get f'.
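
The one-line calculation behind this slide, spelled out (our reconstruction):

\[
\Pr_{x}[\text{a width-}w\text{ term is satisfied}] = 2^{-w} < \frac{\varepsilon}{s}
\quad\text{whenever } w > \log_2(s/\varepsilon),
\]
so deleting each of the at most \(s\) wide terms changes \(f\) on fewer than an \(\varepsilon/s\) fraction of inputs, and a union bound gives \(\Pr_x[f(x) \neq f'(x)] < \varepsilon\).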

Occam + approximation + [GGR98]? Take τ much smaller than ε: this makes f' so close to f that, as long as we only use uniform random examples, we can pretend we are sampling from f'. Given any s-term DNF, there is a τ-approximating DNF with s terms where each term contains at most log(s/τ) variables. So we can try to learn C' = {all s-term log(s/τ)-DNF over x_1, ..., x_n}. Now Occam requires O((1/ε)·s·log(s/τ)·log n) examples: better, but still depends on n.

Getting rid of n? Each approximating DNF f' depends on only k = s·log(s/τ) variables. Suppose we knew those variables. Then we would have C'' = {all s-term log(s/τ)-DNF over just those k variables}, so Occam would need only poly(s, log(1/τ), 1/ε) examples, independent of n! But we cannot explicitly identify even one relevant variable with a number of queries independent of n...

The fix: implicit learning. High-level idea: learn the structure of f' without explicitly identifying its relevant variables. The algorithm tries to find an approximator of the form g(x_{σ(1)}, ..., x_{σ(k)}), where g is a function over k variables and σ is an unknown mapping from the k junta coordinates to the actual variables.

Implicit learning. How can we learn the structure of f' without knowing its relevant variables? We need to generate examples of f': many correctly labeled random examples for the s-term log(s/τ)-DNF approximator f', where each example string is only k bits long. Then we can do Occam (brute-force search for a consistent DNF over k variables).

Implicit learning, continued. The variables of f' are the variables that have high influence in f: flipping such a bit is likely to change the value of f, while the settings of the other variables almost never matter. Given a random n-bit labeled example (x, f(x)), we want to construct a k-bit labeled example for f'. We do this using techniques of [FKRSS02], "Testing Juntas".

Use the independence test of [FKRSS02]. Let S be a subset of the variables. Independence test [FKRSS02]: fix a random assignment to the variables not in S; draw two independent settings of the variables in S; query f on these two points. Intuition: if S contains only low-influence variables, we see the same value of f whp; if S contains a high-influence variable, we see different values a noticeable fraction of the time.
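
A minimal Python sketch of this independence test (our illustration; the function name and trial count are ours, and the real test chooses the number of trials based on the influence threshold):

import random

def independence_test(f, n, S, trials=20):
    """[FKRSS02]-style test of whether the subset S of coordinates matters to f.

    Each trial: fix a random setting of the coordinates outside S, draw two
    independent settings of the coordinates inside S, query f on both points.
    Returns True ("S looks influential") if the two answers ever differ.
    f takes a tuple of n bits; S is a set of coordinate indices.
    """
    for _ in range(trials):
        base = [random.randint(0, 1) for _ in range(n)]
        x, y = list(base), list(base)
        for i in S:
            x[i] = random.randint(0, 1)
            y[i] = random.randint(0, 1)
        if f(tuple(x)) != f(tuple(y)):
            return True
    return False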

Constructing our examples. Given a random n-bit labeled example (x, f(x)), we want to construct a k-bit example. Following [FKRSS02]: randomly partition the variables into blocks and run the independence test on each block. This determines which blocks contain high-influence variables, and each block should contain at most one high-influence variable (birthday paradox).

Constructing our examples, continued. We know which blocks contain high-influence variables; we need to determine how the high-influence variable in each such block is set in x. Consider a fixed high-influence block B. The string x partitions B into two parts: the bits of B set to 0 in x and the bits of B set to 1 in x. Run the independence test on each of the two parts to see which one contains the high-influence variable. Repeat for all high-influence blocks to get all k bits of the constructed example.
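
Continuing the sketch, here is how a k-bit example could be extracted from an n-bit example, reusing the independence_test sketch above. This is a simplified illustration (error management and repetition counts omitted) and all names are ours.

def extract_junta_bits(f, n, blocks, high_blocks, x):
    """Recover how the (single) high-influence variable in each high-influence
    block is set in x, without ever naming that variable.

    blocks: list of sets of coordinate indices (a random partition of [n])
    high_blocks: indices of blocks flagged as containing a high-influence variable
    Returns the list of recovered bits, one per high-influence block.
    """
    bits = []
    for b in high_blocks:
        ones = {i for i in blocks[b] if x[i] == 1}   # block coords set to 1 in x
        # The high-influence variable lives either among `ones` or among the block's
        # 0-coordinates; if `ones` still shows influence, that variable is set to 1 in x.
        bits.append(1 if independence_test(f, n, ones) else 0)
    return bits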

The test and its completeness. Suppose f is an s-term DNF. Then f is τ-close to an s-term log(s/τ)-DNF f'. The test constructs a sample of random k-bit examples that, whp, are all correctly labeled according to f'. It then checks all s-term log(s/τ)-DNFs over k variables for consistency with the sample, outputs yes if any consistent DNF is found, and otherwise outputs no. Since f' itself is consistent, the test outputs yes.

Sketch of soundness of the test. Suppose f is ε-far from every s-term DNF. If f is far from every k-junta, [FKRSS02] catches it (too many high-influence blocks) and the test outputs no. So suppose f is close to a k-junta g and the algorithm constructs a sample of k-bit examples labeled by g. Then whp there is no s-term log(s/τ)-DNF consistent with the sample, so the test outputs no. Indeed, if there were such a DNF h consistent with the sample, Occam would imply h is close to g; g is close to f by assumption; so h would be close to f, contradicting that f is ε-far from every s-term DNF. END OF SKETCH.

Testing by Implicit Learning. We can use the TbIL approach for any class C such that every f in C has an f' that is an ε-approximator for f and depends on few variables. Many classes have this property; all of the following are testable with poly(s/ε) queries: s-term DNF; size-s Boolean formulas (AND/OR/NOT gates); size-s Boolean circuits (AND/OR/NOT gates); s-sparse polynomials over GF(2) (parities of ANDs); s-leaf decision trees; size-s branching programs; s-sparse algebraic circuits over GF(2); s-sparse algebraic computation trees over GF(2).

2. Extension of the basic method: computationally efficient testing for s-sparse GF(2) polynomials. Ilias Diakonikolas, Homin Lee, Kevin Matulef, Andrew Wan: "Efficiently Testing Sparse GF(2) Polynomials".

Pros and cons of basic Testing by Implicit Learning. The approach works for many classes of functions, with a one-size-fits-all algorithm, but it is computationally inefficient: the running time is exponential in poly(s/ε), and the bottleneck is the brute-force search used as the proper learning algorithm (try all s-term DNFs over k variables). Can we use smarter learning algorithms to get (computationally) more efficient testers? Yes (in at least one case), but some complications ensue.

GF(2) Polynomials. A GF(2) polynomial p : {0,1}^n → {0,1} is a parity (sum mod 2) of monotone conjunctions (monomials), e.g. p(x) = 1 + x1 x3 + x2 x3 + x1 x4 x5 x6 x8 + x2 x7 x8 x9 x10. Sparsity = number of monomials; a polynomial is s-sparse if it has at most s monomials. The class of s-sparse GF(2) polynomials over {0,1}^n is well-studied from various perspectives: [BS90, FS92, SS96, B97, BM02] (learning); [K89, GKS90, RB9, EK89, KL93, LVW93] (approximation, approximate counting).
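
For concreteness, a toy Python representation of a sparse GF(2) polynomial as a list of monomials (our illustration, not from the talk): each monomial is a frozenset of variable indices, the empty set being the constant-1 monomial, and evaluation is the XOR of the ANDs.

def eval_sparse_gf2(monomials, x):
    """Evaluate sum over monomials of prod_{i in monomial} x_i, mod 2."""
    value = 0
    for mono in monomials:
        if all(x[i] == 1 for i in mono):
            value ^= 1
    return value

# The slide's example, 0-indexed: p(x) = 1 + x1 x3 + x2 x3 + x1 x4 x5 x6 x8 + x2 x7 x8 x9 x10
p = [frozenset(), frozenset({0, 2}), frozenset({1, 2}),
     frozenset({0, 3, 4, 5, 7}), frozenset({1, 6, 7, 8, 9})]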

Main Result. Theorem: there is a testing algorithm for the class {all s-sparse GF(2) polynomials} that uses poly(s, 1/ε) queries and runs in time polynomial in n, s and 1/ε. (The query complexity matches the lower bound of [DLMORSW07].) Ingredients for the main result: the Testing by Implicit Learning framework [DLMORSW07]; the efficient proper learning algorithm of [Schapire-Sellie 96]; and a new structural theorem: s-sparse polynomials simplify nicely under certain carefully chosen random restrictions.

Efficient Proper Learning of s-sparse GF(2) Polynomials. Theorem [SS96]: there is a uniform-distribution query algorithm that properly PAC learns s-sparse polynomials over {0,1}^n to accuracy ε in time (and query complexity) poly(n, s, 1/ε). Great! But the learning algorithm uses black-box (membership) queries, and black-box queries to f' (a junta which approximates f) are not as easy to simulate as uniform random examples.

Random Examples vs. Queries. Let f : {0,1}^n → {0,1} be a sparse polynomial and let f' be some τ-approximator to f. Assume 1/τ is much larger than the number of uniform random examples required for Occam learning of f'. Then random examples for f are fine to use as random examples for f'. But a black-box query algorithm may make few queries yet cluster those queries on the few inputs where f and f' disagree. It is no longer good enough for f' to be a high-accuracy approximator of f.

The challenge. Let f : {0,1}^n → {0,1} be an s-sparse polynomial (presumably not a junta). We need to simulate black-box queries to a τ-approximator f' that is both an s-sparse polynomial and a junta, and we must get the answer of f' right on every input! To make this work, the approximating function f' must be defined carefully. Roughly speaking, f' is obtained as follows: 1. Randomly partition the variables into r = poly(s/ε) subsets. 2. Let f' be the restriction obtained from f by setting all variables in low-influence subsets to 0. Intuition: kill all long monomials.

Illustration (I). Suppose p(x) = 1 + x1 x3 + x2 x3 + x1 x4 x5 x6 x8 + x2 x7 x8 x9 x10 and r = 5.

Illustration (II). Suppose p(x) = 1 + x1 x3 + x2 x3 + x1 x4 x5 x6 x8 + x2 x7 x8 x9 x10 and r = 5. Randomly partition the variables into r subsets.

Illustration (III). Suppose p(x) = 1 + x1 x3 + x2 x3 + x1 x4 x5 x6 x8 + x2 x7 x8 x9 x10 and r = 5. Check the influence of each subset (independence test): green subsets have low influence, red subsets have high influence.

Illustration (IV). Suppose p(x) = 1 + x1 x3 + x2 x3 + x1 x4 x5 x6 x8 + x2 x7 x8 x9 x10 and r = 5. Zero out the variables in low-influence subsets: p'(x) = p(x1, x2, x3, x4, x5, 0, x7, 0, 0, 0) = 1 + x1 x3 + x2 x3.
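
In the toy monomial-list representation sketched earlier, the restriction in Illustration (IV) is just dropping every monomial that touches a zeroed variable (our illustration; p is the list defined in the earlier sketch):

def restrict_to_zero(monomials, zeroed_vars):
    """Restriction of a sparse GF(2) polynomial obtained by setting the given
    variables to 0: any monomial containing a zeroed variable vanishes."""
    return [m for m in monomials if not (m & zeroed_vars)]

# Zeroing the low-influence variables of the illustration (x6, x8, x9, x10 in the
# slide's 1-indexed notation, i.e. indices {5, 7, 8, 9}) leaves 1 + x1 x3 + x2 x3:
p_restricted = restrict_to_zero(p, frozenset({5, 7, 8, 9}))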

Testing Algorithm for s-sparse GF(2) Polynomials. 1. Partition the coordinates [n] into r = poly(s/ε) random subsets. 2. Distinguish the subsets that contain a high-influence variable from those that do not. 3. Consider the restriction f' obtained from f by zeroing out all the variables in low-influence subsets. 4. Implicitly run [SS96] using a simulated black-box oracle for the function f': do implicit learning, constructing query strings similarly to [DLMORSW07].

Why it Works: Structural Theorem. Theorem: let p : {0,1}^n → {0,1} be an s-sparse polynomial. Consider a random partition of the set [n] into r = poly(s/ε) many subsets, and let p' be the restriction obtained by fixing all variables in low-influence subsets to 0. Then whp the following are true: 1. p' has at most one of its relevant variables in each surviving subset; 2. p' consists of exactly those monomials of p that consist entirely of high-influence variables; 3. p' is an s·log(s/ε)-junta that is ε-close to p. This enables us to simulate black-box queries and thus implicitly run the [SS96] learning algorithm.

3. Different extension: testing classes of functions with low Fourier dimension. Parikshit Gopalan, Ryan O'Donnell, Amir Shpilka, Karl Wimmer: "Testing Fourier Dimensionality and Sparsity".

Fourier dimension. The class {all Boolean functions with Fourier dimension k} is {all k-juntas-of-parities}, i.e. functions of the form f(x) = g(χ1(x), ..., χk(x)), where each χi is an arbitrary parity over {0,1}^n and g is an arbitrary k-variable function.

Example of Fourier dimension. Viewing inputs as ±1, monomials are parities, so a k-sparse polynomial threshold function f(x) = sign(c1 χ_{S1}(x) + ... + ck χ_{Sk}(x)) is a function of the k parities χ_{S1}, ..., χ_{Sk} and hence has Fourier dimension at most k. So k-sparse PTFs have Fourier dimension at most k.

Third main result: testing subclasses of k-dimensional functions. The Fourier dimension of f is the minimum k such that f is a k-junta of parities. Let C be a subclass of all k-variable Boolean functions. The subclass of k-dimensional functions induced by C is the class {g(χ1, ..., χk) : g in C, each χi a parity}, i.e. a class of functions over {0,1}^n. Example: C = all linear threshold functions over k variables induces the class of all k-sparse polynomial threshold functions over {0,1}^n.

Main result: testing subclasses of k-dimensional functions. Let C be a subclass of all k-variable Boolean functions, and let C* be the subclass of k-dimensional functions induced by C. Theorem: for any such induced subclass C*, there is a nonadaptive poly(2^k, 1/ε)-query algorithm for testing C*. So we can test, e.g., k-sparse PTFs over {-1,1}^n (take C = LTFs over k variables), size-k decision trees with parity nodes (take C = decision trees of size k), etc.

Very sketchy sketch. Parities play the role of influential variables in the basic TbIL approach: we can't explicitly identify them, but we can determine the values they take on any given example. Construct an implicit truth table with 2^k rows, one for each setting of the k parities, and check its consistency with some g in C. (So the learning is trivial: we get the whole truth table.)
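
A highly simplified Python sketch of the implicit-truth-table idea (our illustration): here we cheat and assume the k parities are handed to us explicitly as sets of coordinates, whereas the actual algorithm determines the parity values of a query point without ever identifying the parities.

import random

def build_truth_table(f, n, parities, samples):
    """parities: list of k sets of coordinates (the hidden parities chi_1..chi_k).
    Fill in table[row] = f(x), where row is the tuple of k parity values of x."""
    table = {}
    for _ in range(samples):
        x = tuple(random.randint(0, 1) for _ in range(n))
        row = tuple(sum(x[i] for i in chi) % 2 for chi in parities)
        table[row] = f(x)        # well-defined only if f really is a function of the parities
    return table

def consistent_with(table, g):
    """Check the (possibly partial) table against a candidate k-variable function g."""
    return all(table[row] == g(row) for row in table)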

Summary. 1. Testing by Implicit Learning: a method for testing classes of Boolean functions; combines learning theory ingredients with junta testing [FKRSS04]; works for classes of functions that are well approximated by juntas; gives new query-efficient testing results for many classes. 2. Extension of the basic method: computationally efficient testing for sparse GF(2) polynomials; uses a sophisticated black-box-query algorithm from learning theory [SS96] and a careful construction of the junta approximator. 3. Different extension of the basic method: query-efficient testing for many classes with low-dimensional Fourier spectrum; parities play the role of the variables in the junta; really brute-force learning (build the whole truth table!).

1. Future Work: Testing by Implicit Learning. What are the right lower bounds for testing classes like s-term DNF, size-s decision trees, formulas, circuits? A lower bound can be obtained following [CG04], but it feels like the right bound should be higher. Is there any way to extend TbIL to the distribution-independent testing framework? Obvious problem: under an arbitrary distribution the functions may be hard to approximate by juntas.

2. Future Work: Computationally Efficient TbIL for s-sparse GF(2) polynomials. Get efficient runtimes for more classes! Computationally efficient proper learning algorithms would yield these, but they seem hard to come by. First step: extend the GF(2) results to any finite field, with comparably efficient runtime. Main bottleneck: we need fast proper learning algorithms that only evaluate the polynomial being learned over inputs from the original domain; the algorithm of [B97] for learning s-sparse polynomials over larger fields does not achieve the desired runtime.

3. Future Work: Testing subclasses of k-dimensional functions. Is there any way to do it with real learning (implicitly generating just a sample rather than the whole truth table)? This might lead to poly(k) query bounds in some cases, rather than the current 2^k bound.

Thank You

Very sketchy sketch (1). Extends the Fourier sparsity test with Testing by Implicit Learning ideas. Fourier sparsity test: a poly(s/ε)-query test for C = s-sparse functions. Works by hashing the Fourier coefficients of f into random cosets: if f is s-sparse and we hash into O(s^2) buckets, w.h.p. every coset gets at most one nonzero Fourier coefficient (birthday paradox). So we can isolate all the parities.
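
A toy Python illustration of the coset-hashing step (ours; it assumes the Fourier support is handed to us explicitly, which the real test of course does not have): a random linear map over GF(2) assigns each frequency vector a bucket, and with roughly s^2 buckets all s support frequencies land in distinct buckets with good probability.

import random

def random_linear_hash(n, t):
    """A random t x n matrix over GF(2); buckets are indexed by {0,1}^t.
    Choose t around 2*log2(s) + O(1) so there are about s^2 buckets."""
    return [[random.randint(0, 1) for _ in range(n)] for _ in range(t)]

def bucket(A, alpha):
    """Hash a frequency vector alpha in {0,1}^n to its coset label A*alpha mod 2."""
    return tuple(sum(a * b for a, b in zip(row, alpha)) % 2 for row in A)

def all_isolated(support, n, t):
    """Do the s support frequencies land in pairwise distinct buckets?"""
    A = random_linear_hash(n, t)
    labels = [bucket(A, alpha) for alpha in support]
    return len(set(labels)) == len(labels)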