Quantum Recommendation Systems


Iordanis Kerenidis (CNRS, Université Paris Diderot, Paris, France, EU) and Anupam Prakash (Nanyang Technological University, Singapore). April 4, 2017.

The HHL algorithm

- Utilize the intrinsic linear algebra capabilities of quantum computers for exponential speedups.
- Vector state: |x⟩ = Σ_i x_i |i⟩, where x ∈ R^n is a unit vector.
- Given a sparse matrix A ∈ R^{n×n} and |b⟩, there is a quantum algorithm that prepares |A⁻¹b⟩ in time polylog(n). [Harrow, Hassidim, Lloyd]
- Assumptions: |b⟩ can be prepared in polylog(n) time and A is polylog(n)-sparse.
- Incomparable to a classical linear system solver, which returns the full vector x ∈ R^n as opposed to the state |x⟩.
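As a point of reference for the statement above, here is a purely classical sketch of the objects involved: a classical solver returns the full vector x, while the quantum algorithm only prepares the state |x⟩, whose measurement statistics are the squared amplitudes. Toy, well-conditioned data for illustration only.

```python
import numpy as np

# Classical baseline for contrast with HHL: solve Ax = b and form the
# amplitudes of the corresponding vector state |x> = sum_i x_i |i>.
rng = np.random.default_rng(0)
n = 8
A = np.diag(rng.uniform(1.0, 2.0, size=n))   # trivially sparse: diagonal
b = rng.normal(size=n)

x = np.linalg.solve(A, b)          # classical output: the full vector x in R^n
x_state = x / np.linalg.norm(x)    # amplitudes of the state |x>
probs = x_state**2                 # measuring |x> returns index i w.p. x_i^2
```

Note the asymmetry: reading out all of `x` from the state would cost Ω(n) measurements, which is why the two outputs are incomparable.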

Quantum Machine Learning

- HHL led to several proposals for quantum machine learning algorithms: principal component analysis, classification with l2-SVMs, k-means clustering, perceptrons, nearest neighbors... [Lloyd, Mohseni, Rebentrost; Wiebe, Kapoor, Svore]
- These algorithms achieve exponential speedups only for sparse/well-conditioned data.
- Sometimes a variant of the classical problem is solved: l1- vs l2-SVM.
- Incomparable with classical algorithms.

Quantum Recommendation Systems

- Open problem: a quantum machine learning algorithm with an exponential worst-case speedup for a classical problem.
- Quantum recommendation systems: an exponential speedup over classical algorithms with similar assumptions and guarantees.
- An end-to-end application with no assumptions on the data set.
- Solves the same problem as a classical recommendation system.

The Recommendation Problem

The preference matrix P (? marks unobserved entries):

         P1    P2    P3    P4   ...  P_{n-1}  P_n
  U1     .1    .4    ?     ?    ...  ?        .9
  U2     .2    ?     .6    ?    ...  .85      ?
  U3     ?     ?     .8    .9   ...  ?        .2
  ...
  U_m    ?     .75   ?     ?    ...  ?        .2

- P_ij is the value of item j for user i.
- Samples from P arrive in an online manner.
- The assumption that P has a good rank-k approximation for small k is widely used.
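The low-rank assumption reflects the idea that users fall into a small number of "types". A minimal classical sketch with hypothetical data:

```python
import numpy as np

# If each user's preferences are a mixture of k type profiles, the full
# preference matrix P factors through k dimensions, so rank(P) <= k.
rng = np.random.default_rng(5)
m, n, k = 100, 60, 3
user_types = rng.random((m, k))    # each user: weights over k types
type_prefs = rng.random((k, n))    # each type: preferences over n items
P = user_types @ type_prefs        # rank <= k preference matrix
```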

20 The Netflix problem

Reconstruction vs sampling

- Matrix reconstruction algorithms reconstruct an approximation of P using the low-rank assumption and require time poly(mn).
- A reconstruction-based recommendation system requires time poly(n), even with pre-computation.
- Matrix sampling suffices to obtain good recommendations.
- Quantum algorithms can perform matrix sampling.

Theorem. There is a quantum recommendation algorithm with running time O(poly(k) polylog(mn)).
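The distinction between reconstruction and sampling can be made concrete: to recommend, one never needs the whole row, only a sample from it. A minimal classical sketch with hypothetical preference values:

```python
import numpy as np

# Sampling a recommendation from one user's (approximated) preference row:
# item j is returned with probability proportional to its squared entry,
# the distribution that measuring the row's vector state would produce.
rng = np.random.default_rng(1)
row = np.array([0.1, 0.4, 0.0, 0.0, 0.0, 0.9])   # hypothetical user row
probs = row**2 / np.sum(row**2)
recommendation = rng.choice(len(row), p=probs)   # no full reconstruction needed
```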

Computational Model

- Samples from P arrive in an online manner and are stored in a data structure with update time O(log²(mn)).
- The quantum algorithm has oracle access to a binary tree data structure storing additional metadata.
- We use the standard memory model used for algorithms like Grover search.
- Users arrive into the system in an online manner and the system provides recommendations in time poly(k) polylog(mn).
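The binary tree data structure can be sketched classically: leaves store squared entries of a row, internal nodes store subtree sums, so each arriving sample costs O(log n) to insert and an index can be drawn proportionally to the squared entries in O(log n) by walking down from the root. This is a minimal hypothetical version, not the exact structure of the paper:

```python
import numpy as np

class SampleTree:
    """Binary tree over n leaves; leaf i holds value_i^2, nodes hold sums."""

    def __init__(self, n):
        self.n = n
        self.tree = np.zeros(2 * n)          # tree[n:] are the leaves

    def update(self, i, value):              # O(log n) per arriving sample
        pos = self.n + i
        self.tree[pos] = value**2
        pos //= 2
        while pos >= 1:                      # refresh sums up to the root
            self.tree[pos] = self.tree[2 * pos] + self.tree[2 * pos + 1]
            pos //= 2

    def sample(self, rng):                   # index i w.p. value_i^2 / total
        r = rng.uniform(0, self.tree[1])
        pos = 1
        while pos < self.n:                  # descend toward the sampled leaf
            left = self.tree[2 * pos]
            if r < left:
                pos = 2 * pos
            else:
                r -= left
                pos = 2 * pos + 1
        return pos - self.n

tree = SampleTree(4)
for i, v in enumerate([0.1, 0.4, 0.0, 0.9]):
    tree.update(i, v)                        # online updates as samples arrive
```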

Singular value estimation

- The singular value decomposition of a matrix A is written as A = Σ_i σ_i u_i v_iᵗ.
- The rank-k approximation A_k = Σ_{i∈[k]} σ_i u_i v_iᵗ minimizes ‖A − A_k‖_F.

Quantum singular value estimation:

Theorem. There is an algorithm with running time O(polylog(mn)/ε) that transforms Σ_i α_i |v_i⟩ → Σ_i α_i |v_i⟩|σ̃_i⟩, where σ̃_i ∈ σ_i ± ε‖A‖_F with probability at least 1 − 1/poly(n).
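The decomposition on this slide has a direct classical counterpart, along with the Eckart-Young fact that the truncated SVD minimizes the Frobenius error (toy data for illustration):

```python
import numpy as np

# A = sum_i sigma_i u_i v_i^t via SVD; keeping the top k terms gives the
# best rank-k approximation A_k in Frobenius norm (Eckart-Young).
rng = np.random.default_rng(2)
A = rng.normal(size=(6, 5))

U, s, Vt = np.linalg.svd(A, full_matrices=False)   # s sorted descending
k = 2
A_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k]
err = np.linalg.norm(A - A_k)   # sqrt of the sum of the discarded sigma_i^2
```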

Matrix Sampling

Let T be a 0/1 matrix such that T_ij = 1 if item j is a good recommendation for user i (? marks unobserved entries):

         P1    P2    P3    P4   ...  P_{n-1}  P_n
  U1     0     0     ?     ?    ...  ?        1
  U2     0     ?     0     ?    ...  1        ?
  U3     ?     ?     1     1    ...  ?        0
  ...
  U_m    ?     1     ?     ?    ...  ?        0

Set the ?s to 0 and rescale to obtain a subsample matrix T̂.

Figure: Matrix sampling based recommendation system. P → (rounding) → T → (uniform subsample) → T̂ → (low-rank assumption; SVD, QSVE) → T̂_k ≈ T_k [Achlioptas, McSherry, extended].

Matrix Sampling

- T is the binary recommendation matrix obtained by rounding P.
- T̂ is a uniform subsample of T, where a uniform subsample Â of a matrix A is defined entrywise by:

      Â_ij = A_ij / p   with probability p
      Â_ij = 0          otherwise

- T̂_k and T_k are rank-k approximations of T̂ and T.
- The low-rank assumption implies that ‖T − T_k‖_F ≤ ε‖T‖_F for small k.
- Analysis: sampling from a matrix close to T_k yields good recommendations.
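The subsampling step above has a direct classical sketch: keep each entry with probability p and rescale by 1/p, which makes the subsample an unbiased entrywise estimate of the original matrix (toy data, assumed parameters):

```python
import numpy as np

# Uniform subsampling: A_hat_ij = A_ij/p with probability p, else 0,
# so E[A_hat] = A entrywise.
rng = np.random.default_rng(3)
A = rng.uniform(size=(20, 30))
p = 0.3
mask = rng.random(A.shape) < p
A_hat = np.where(mask, A / p, 0.0)

# Unbiasedness check: averaging many independent subsamples recovers A.
avg = np.mean(
    [np.where(rng.random(A.shape) < p, A / p, 0.0) for _ in range(2000)],
    axis=0,
)
```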

Analysis

- Samples from T_k are good recommendations for a large fraction of typical users.
- Sampling from T̂_k suffices.

Theorem (AM02). If Â is obtained from a 0/1 matrix A by subsampling with probability p = 16n/(η‖A‖²_F), then with probability at least 1 − exp(−19(log n)⁴), for all k,

      ‖A − Â_k‖_F ≤ ‖A − A_k‖_F + 3(ηk)^{1/4} ‖A‖_F
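A small numerical check of the phenomenon behind the theorem (with an arbitrary p rather than the exact constants): the best rank-k approximation of the rescaled subsample competes with the best rank-k approximation of A itself, up to an additive term.

```python
import numpy as np

def rank_k(M, k):
    """Best rank-k approximation in Frobenius norm, via truncated SVD."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U[:, :k] @ np.diag(s[:k]) @ Vt[:k]

rng = np.random.default_rng(6)
A = (rng.random((80, 80)) < 0.5).astype(float)         # a 0/1 matrix
p = 0.5                                                # illustrative, not the theorem's p
A_hat = np.where(rng.random(A.shape) < p, A / p, 0.0)  # uniform subsample

k = 5
err_sub = np.linalg.norm(A - rank_k(A_hat, k))   # ||A - A_hat_k||_F
err_best = np.linalg.norm(A - rank_k(A, k))      # ||A - A_k||_F, optimal by Eckart-Young
```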

Analysis

- The quantum algorithm samples from T̂_{σ,κ}, a projection of T̂ onto the span of singular vectors with all singular values ≥ σ and some in the range [(1−κ)σ, σ).
- We extend AM02 to this setting, showing that ‖T − T̂_{σ,κ}‖_F ≤ 9ε‖T‖_F.
- For most typical users, samples from (T̂_{σ,κ})_i are good recommendations with high probability.

Quantum Recommendation Algorithm

- Prepare the state |T̂_i⟩ corresponding to the row for user i.
- Apply the quantum projection algorithm to |T̂_i⟩ to obtain |(T̂_{σ,κ})_i⟩.
- Measure the projected state in the computational basis to get a recommendation.
- The threshold is σ = ε √(p/2k) ‖A‖_F and κ = 1/3.
- The running time depends on the threshold and not on the condition number.
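The three steps above can be emulated classically for intuition (hypothetical toy matrix; the threshold here is illustrative, not the one from the slide): project the user's row of T̂ onto the large-singular-value subspace and sample from the squared entries of the projection.

```python
import numpy as np

# Classical emulation of: prepare |T_hat_i>, project onto singular values
# above a threshold, measure in the computational basis.
rng = np.random.default_rng(4)
T_hat = (rng.random((50, 40)) < 0.3).astype(float)   # stand-in subsampled 0/1 matrix

U, s, Vt = np.linalg.svd(T_hat, full_matrices=False)
sigma = 0.5 * s[0]            # illustrative threshold only
V_keep = Vt[s >= sigma]       # right singular vectors with sigma_i >= sigma

row = T_hat[7]                            # user i's row
proj = V_keep.T @ (V_keep @ row)          # the projected row (T_hat_{sigma})_i
probs = proj**2 / np.sum(proj**2)         # measurement distribution
item = rng.choice(len(probs), p=probs)    # the recommendation
```

The quantum algorithm achieves the same sampling exponentially faster because the projection acts directly on the amplitudes rather than on an explicitly computed SVD.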

The projection algorithm

- Let A = Σ_i σ_i u_i v_iᵗ be the singular value decomposition, and write the input as |x⟩ = Σ_i α_i |v_i⟩.
- Estimate the singular values, Σ_i α_i |v_i⟩|σ̃_i⟩, to additive error κσ/2.
- Map to Σ_i α_i |v_i⟩|σ̃_i⟩|t⟩, where t = 1 if σ̃_i ≥ (1 − κ/2)σ, and erase σ̃_i.
- Post-select on t = 1.
- The output |A_{σ,κ} x⟩ is a projection onto the space of singular vectors with singular values ≥ σ and some in the range [(1−κ)σ, σ).

Open Questions

- Find a classical matrix sampling based recommendation algorithm that runs in time O(poly(k) polylog(mn)), OR prove a lower bound that rules out such an algorithm.
- Find more quantum machine learning algorithms.


More information

EVD & SVD EVD & SVD. Durgesh Kumar. June 10, 2016

EVD & SVD EVD & SVD. Durgesh Kumar. June 10, 2016 EVD & SVD Durgesh Kumar June 10, 2016 Table of contents 1 Why bother so much for SVD and EVD? 2 Pre-requisite Fundamental of Matrix Multiplication Eigen Value and Eigen Vector 3 EVD Defintion of EVD Use

More information

More about the Perceptron

More about the Perceptron More about the Perceptron CMSC 422 MARINE CARPUAT marine@cs.umd.edu Credit: figures by Piyush Rai and Hal Daume III Recap: Perceptron for binary classification Classifier = hyperplane that separates positive

More information

Statistical Machine Learning Theory. From Multi-class Classification to Structured Output Prediction. Hisashi Kashima.

Statistical Machine Learning Theory. From Multi-class Classification to Structured Output Prediction. Hisashi Kashima. http://goo.gl/jv7vj9 Course website KYOTO UNIVERSITY Statistical Machine Learning Theory From Multi-class Classification to Structured Output Prediction Hisashi Kashima kashima@i.kyoto-u.ac.jp DEPARTMENT

More information

Estimating the Largest Elements of a Matrix

Estimating the Largest Elements of a Matrix Estimating the Largest Elements of a Matrix Samuel Relton samuel.relton@manchester.ac.uk @sdrelton samrelton.com blog.samrelton.com Joint work with Nick Higham nick.higham@manchester.ac.uk May 12th, 2016

More information

Lecture 7: Passive Learning

Lecture 7: Passive Learning CS 880: Advanced Complexity Theory 2/8/2008 Lecture 7: Passive Learning Instructor: Dieter van Melkebeek Scribe: Tom Watson In the previous lectures, we studied harmonic analysis as a tool for analyzing

More information

Advances in quantum machine learning

Advances in quantum machine learning Advances in quantum machine learning arxiv:1512.02900v1 [quant-ph] 9 Dec 2015 J. C. Adcock, E. Allen, M. Day*, S. Frick, J. Hinchliff, M. Johnson, S. Morley-Short, S. Pallister, A. B. Price, S. Stanisic

More information

High Performance Parallel Tucker Decomposition of Sparse Tensors

High Performance Parallel Tucker Decomposition of Sparse Tensors High Performance Parallel Tucker Decomposition of Sparse Tensors Oguz Kaya INRIA and LIP, ENS Lyon, France SIAM PP 16, April 14, 2016, Paris, France Joint work with: Bora Uçar, CNRS and LIP, ENS Lyon,

More information

Numerical Learning Algorithms

Numerical Learning Algorithms Numerical Learning Algorithms Example SVM for Separable Examples.......................... Example SVM for Nonseparable Examples....................... 4 Example Gaussian Kernel SVM...............................

More information

High-Dimensional Indexing by Distributed Aggregation

High-Dimensional Indexing by Distributed Aggregation High-Dimensional Indexing by Distributed Aggregation Yufei Tao ITEE University of Queensland In this lecture, we will learn a new approach for indexing high-dimensional points. The approach borrows ideas

More information

1 Learning Linear Separators

1 Learning Linear Separators 8803 Machine Learning Theory Maria-Florina Balcan Lecture 3: August 30, 2011 Plan: Perceptron algorithm for learning linear separators. 1 Learning Linear Separators Here we can think of examples as being

More information

Scikit-learn. scikit. Machine learning for the small and the many Gaël Varoquaux. machine learning in Python

Scikit-learn. scikit. Machine learning for the small and the many Gaël Varoquaux. machine learning in Python Scikit-learn Machine learning for the small and the many Gaël Varoquaux scikit machine learning in Python In this meeting, I represent low performance computing Scikit-learn Machine learning for the small

More information

Budget-Optimal Task Allocation for Reliable Crowdsourcing Systems

Budget-Optimal Task Allocation for Reliable Crowdsourcing Systems Budget-Optimal Task Allocation for Reliable Crowdsourcing Systems Sewoong Oh Massachusetts Institute of Technology joint work with David R. Karger and Devavrat Shah September 28, 2011 1 / 13 Crowdsourcing

More information

EECS 275 Matrix Computation

EECS 275 Matrix Computation EECS 275 Matrix Computation Ming-Hsuan Yang Electrical Engineering and Computer Science University of California at Merced Merced, CA 95344 http://faculty.ucmerced.edu/mhyang Lecture 22 1 / 21 Overview

More information

Matrix Completion: Fundamental Limits and Efficient Algorithms

Matrix Completion: Fundamental Limits and Efficient Algorithms Matrix Completion: Fundamental Limits and Efficient Algorithms Sewoong Oh PhD Defense Stanford University July 23, 2010 1 / 33 Matrix completion Find the missing entries in a huge data matrix 2 / 33 Example

More information

Nearest Neighbors Methods for Support Vector Machines

Nearest Neighbors Methods for Support Vector Machines Nearest Neighbors Methods for Support Vector Machines A. J. Quiroz, Dpto. de Matemáticas. Universidad de Los Andes joint work with María González-Lima, Universidad Simón Boĺıvar and Sergio A. Camelo, Universidad

More information

STA141C: Big Data & High Performance Statistical Computing

STA141C: Big Data & High Performance Statistical Computing STA141C: Big Data & High Performance Statistical Computing Lecture 6: Numerical Linear Algebra: Applications in Machine Learning Cho-Jui Hsieh UC Davis April 27, 2017 Principal Component Analysis Principal

More information

Statistical Machine Learning Theory. From Multi-class Classification to Structured Output Prediction. Hisashi Kashima.

Statistical Machine Learning Theory. From Multi-class Classification to Structured Output Prediction. Hisashi Kashima. http://goo.gl/xilnmn Course website KYOTO UNIVERSITY Statistical Machine Learning Theory From Multi-class Classification to Structured Output Prediction Hisashi Kashima kashima@i.kyoto-u.ac.jp DEPARTMENT

More information

CSCI-567: Machine Learning (Spring 2019)

CSCI-567: Machine Learning (Spring 2019) CSCI-567: Machine Learning (Spring 2019) Prof. Victor Adamchik U of Southern California Mar. 19, 2019 March 19, 2019 1 / 43 Administration March 19, 2019 2 / 43 Administration TA3 is due this week March

More information

Using SVD to Recommend Movies

Using SVD to Recommend Movies Michael Percy University of California, Santa Cruz Last update: December 12, 2009 Last update: December 12, 2009 1 / Outline 1 Introduction 2 Singular Value Decomposition 3 Experiments 4 Conclusion Last

More information

Succinct Data Structures for Approximating Convex Functions with Applications

Succinct Data Structures for Approximating Convex Functions with Applications Succinct Data Structures for Approximating Convex Functions with Applications Prosenjit Bose, 1 Luc Devroye and Pat Morin 1 1 School of Computer Science, Carleton University, Ottawa, Canada, K1S 5B6, {jit,morin}@cs.carleton.ca

More information

Cutting Plane Training of Structural SVM

Cutting Plane Training of Structural SVM Cutting Plane Training of Structural SVM Seth Neel University of Pennsylvania sethneel@wharton.upenn.edu September 28, 2017 Seth Neel (Penn) Short title September 28, 2017 1 / 33 Overview Structural SVMs

More information

Tutorial: PART 2. Online Convex Optimization, A Game- Theoretic Approach to Learning

Tutorial: PART 2. Online Convex Optimization, A Game- Theoretic Approach to Learning Tutorial: PART 2 Online Convex Optimization, A Game- Theoretic Approach to Learning Elad Hazan Princeton University Satyen Kale Yahoo Research Exploiting curvature: logarithmic regret Logarithmic regret

More information

A Deterministic Rescaled Perceptron Algorithm

A Deterministic Rescaled Perceptron Algorithm A Deterministic Rescaled Perceptron Algorithm Javier Peña Negar Soheili June 5, 03 Abstract The perceptron algorithm is a simple iterative procedure for finding a point in a convex cone. At each iteration,

More information

Graph Functional Methods for Climate Partitioning

Graph Functional Methods for Climate Partitioning Graph Functional Methods for Climate Partitioning Mathilde Mougeot - with D. Picard, V. Lefieux*, M. Marchand* Université Paris Diderot, France *Réseau Transport Electrique (RTE) Buenos Aires, 2015 Mathilde

More information

Introduction to Quantum Computing

Introduction to Quantum Computing Introduction to Quantum Computing Part II Emma Strubell http://cs.umaine.edu/~ema/quantum_tutorial.pdf April 13, 2011 Overview Outline Grover s Algorithm Quantum search A worked example Simon s algorithm

More information

Machine Learning

Machine Learning Machine Learning 10-601 Tom M. Mitchell Machine Learning Department Carnegie Mellon University October 11, 2012 Today: Computational Learning Theory Probably Approximately Coorrect (PAC) learning theorem

More information

Lecture 9: SVD, Low Rank Approximation

Lecture 9: SVD, Low Rank Approximation CSE 521: Design and Analysis of Algorithms I Spring 2016 Lecture 9: SVD, Low Rank Approimation Lecturer: Shayan Oveis Gharan April 25th Scribe: Koosha Khalvati Disclaimer: hese notes have not been subjected

More information

Machine Learning (CSE 446): Learning as Minimizing Loss; Least Squares

Machine Learning (CSE 446): Learning as Minimizing Loss; Least Squares Machine Learning (CSE 446): Learning as Minimizing Loss; Least Squares Sham M Kakade c 2018 University of Washington cse446-staff@cs.washington.edu 1 / 13 Review 1 / 13 Alternate View of PCA: Minimizing

More information

Simple and Deterministic Matrix Sketches

Simple and Deterministic Matrix Sketches Simple and Deterministic Matrix Sketches Edo Liberty + ongoing work with: Mina Ghashami, Jeff Philips and David Woodruff. Edo Liberty: Simple and Deterministic Matrix Sketches 1 / 41 Data Matrices Often

More information

Introduction to Machine Learning. Regression. Computer Science, Tel-Aviv University,

Introduction to Machine Learning. Regression. Computer Science, Tel-Aviv University, 1 Introduction to Machine Learning Regression Computer Science, Tel-Aviv University, 2013-14 Classification Input: X Real valued, vectors over real. Discrete values (0,1,2,...) Other structures (e.g.,

More information

arxiv: v2 [quant-ph] 28 Aug 2017

arxiv: v2 [quant-ph] 28 Aug 2017 Implementing a distance-based classifier with a quantum interference circuit Maria Schuld,, Mark Fingerhuth,, 2,, 3, and Francesco Petruccione Quantum Research Group, School of Chemistry and Physics, University

More information

Probabilistic Low-Rank Matrix Completion with Adaptive Spectral Regularization Algorithms

Probabilistic Low-Rank Matrix Completion with Adaptive Spectral Regularization Algorithms Probabilistic Low-Rank Matrix Completion with Adaptive Spectral Regularization Algorithms Adrien Todeschini Inria Bordeaux JdS 2014, Rennes Aug. 2014 Joint work with François Caron (Univ. Oxford), Marie

More information

15-859E: Advanced Algorithms CMU, Spring 2015 Lecture #16: Gradient Descent February 18, 2015

15-859E: Advanced Algorithms CMU, Spring 2015 Lecture #16: Gradient Descent February 18, 2015 5-859E: Advanced Algorithms CMU, Spring 205 Lecture #6: Gradient Descent February 8, 205 Lecturer: Anupam Gupta Scribe: Guru Guruganesh In this lecture, we will study the gradient descent algorithm and

More information

Introduction to Machine Learning. PCA and Spectral Clustering. Introduction to Machine Learning, Slides: Eran Halperin

Introduction to Machine Learning. PCA and Spectral Clustering. Introduction to Machine Learning, Slides: Eran Halperin 1 Introduction to Machine Learning PCA and Spectral Clustering Introduction to Machine Learning, 2013-14 Slides: Eran Halperin Singular Value Decomposition (SVD) The singular value decomposition (SVD)

More information

Provable Alternating Minimization Methods for Non-convex Optimization

Provable Alternating Minimization Methods for Non-convex Optimization Provable Alternating Minimization Methods for Non-convex Optimization Prateek Jain Microsoft Research, India Joint work with Praneeth Netrapalli, Sujay Sanghavi, Alekh Agarwal, Animashree Anandkumar, Rashish

More information

Methods for sparse analysis of high-dimensional data, II

Methods for sparse analysis of high-dimensional data, II Methods for sparse analysis of high-dimensional data, II Rachel Ward May 23, 2011 High dimensional data with low-dimensional structure 300 by 300 pixel images = 90, 000 dimensions 2 / 47 High dimensional

More information

The Power of Asymmetry in Binary Hashing

The Power of Asymmetry in Binary Hashing The Power of Asymmetry in Binary Hashing Behnam Neyshabur Yury Makarychev Toyota Technological Institute at Chicago Russ Salakhutdinov University of Toronto Nati Srebro Technion/TTIC Search by Image Image

More information

Machine Learning for NLP

Machine Learning for NLP Machine Learning for NLP Uppsala University Department of Linguistics and Philology Slides borrowed from Ryan McDonald, Google Research Machine Learning for NLP 1(50) Introduction Linear Classifiers Classifiers

More information

The perceptron learning algorithm is one of the first procedures proposed for learning in neural network models and is mostly credited to Rosenblatt.

The perceptron learning algorithm is one of the first procedures proposed for learning in neural network models and is mostly credited to Rosenblatt. 1 The perceptron learning algorithm is one of the first procedures proposed for learning in neural network models and is mostly credited to Rosenblatt. The algorithm applies only to single layer models

More information

Big Data Analytics: Optimization and Randomization

Big Data Analytics: Optimization and Randomization Big Data Analytics: Optimization and Randomization Tianbao Yang Tutorial@ACML 2015 Hong Kong Department of Computer Science, The University of Iowa, IA, USA Nov. 20, 2015 Yang Tutorial for ACML 15 Nov.

More information

Privacy-Preserving Data Mining

Privacy-Preserving Data Mining CS 380S Privacy-Preserving Data Mining Vitaly Shmatikov slide 1 Reading Assignment Evfimievski, Gehrke, Srikant. Limiting Privacy Breaches in Privacy-Preserving Data Mining (PODS 2003). Blum, Dwork, McSherry,

More information