Quantum Artificial Intelligence and Machine Learning: The Path to Enterprise Deployments. Randall Correll. +1 (703) 867-2395. Palo Alto, CA

Transcription:

Quantum Artificial Intelligence and Machine Learning: The Path to Enterprise Deployments. Randall Correll, randall.correll@qcware.com, +1 (703) 867-2395, Palo Alto, CA. 1

Bundled software and services: Professional Services; Software Development; QC Ware's Application Libraries (Finance, Engineering, Commerce); AQUA Cloud; Problem Mappings (Optimization API, ML/AI API); Quantum Education; Classical-to-Quantum Conversion Engine; Annealing and Circuit-Model Optimization Algorithms; Universal Quantum Hardware Interface. 2

Quantum AI and ML are among the most widely touted use cases for quantum computers. 3

Overview: the why, what, and how of quantum AI and ML.
Why is QC needed for AI/ML computing: what are the anticipated future performance deficiencies of classical computing resources?
What AI/ML problems can (potentially) be addressed by quantum computing in the near term?
How will quantum computing be built into AI/ML computing workflows? 4

AI and ML Overview: ML as an approach to making AI. Artificial Intelligence branches into Logic-based, Knowledge-based, and Machine Learning approaches; Machine Learning in turn branches into Reinforcement, Supervised, and Unsupervised learning, where algorithms learn from the data itself and make predictions from experience. * Note that the lines between the concepts/terms here are blurry. 5

AI and ML Overview: ML as an approach to making AI. Same taxonomy as the previous slide, with an example highlighted: Roomba (autonomous-system path planning). 6

AI and ML Overview: ML as an approach to making AI. Same taxonomy, with examples highlighted: Google News; Facebook face recognition. 7

AI and ML Overview: ML as an approach to making AI. Same taxonomy, with a Supervised example highlighted: Netflix (movie recommendation). 8

AI and ML Overview: ML as an approach to making AI. Same taxonomy, with examples highlighted: IBM Watson, Apple Siri, Google Now. 9

AI and ML Overview: ML as an approach to making AI. Same taxonomy, with an example highlighted: Google AlphaGo; here algorithms make predictions from rules and experience. 10

Supervised example: the Netflix Prize problem.
Goal: predict the rating a user would give to a movie that he or she hasn't yet rated.
Input: training data of user ratings of movies.
Typical problem size: Prize data from 2009 was 100+ million ratings of 17,000+ movies by 480,000+ users. 11
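
Rating prediction of this kind is commonly attacked with matrix factorization; below is a minimal numpy sketch. The toy matrix, latent dimension, and hyperparameters are invented for illustration; this is not the slides' method or the actual Prize dataset (and Prize-winning systems blended many models).

```python
# Minimal matrix-factorization sketch for rating prediction (toy data, invented
# hyperparameters; not the actual Netflix Prize dataset or any specific method).
import numpy as np

rng = np.random.default_rng(0)

# Toy ratings: 4 users x 5 movies, 0 = not yet rated.
R = np.array([
    [5, 3, 0, 1, 4],
    [4, 0, 0, 1, 3],
    [1, 1, 0, 5, 4],
    [0, 1, 5, 4, 0],
], dtype=float)
observed = R > 0

k = 2                                    # latent factors per user/movie
U = 0.1 * rng.standard_normal((4, k))    # user factors
V = 0.1 * rng.standard_normal((5, k))    # movie factors

lr, reg = 0.02, 0.01
for _ in range(2000):                    # SGD over the observed ratings
    for i, j in zip(*np.nonzero(observed)):
        err = R[i, j] - U[i] @ V[j]
        U[i] += lr * (err * V[j] - reg * U[i])
        V[j] += lr * (err * U[i] - reg * V[j])

pred = U @ V.T                           # fills in the unrated cells too
rmse = np.sqrt(np.mean((pred[observed] - R[observed]) ** 2))
print(f"RMSE on observed ratings: {rmse:.3f}")
```

The point of the factorization is the last line but one: the learned factors produce predictions for every (user, movie) cell, including the unrated ones the Prize asked about.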

Unsupervised example: clustering for Google News.
Goal: assign each news story to a cluster (so that stories about the same topic can be shown together).
Input: a set of news stories from today.
Typical problem size: aggregation of content from 50,000+ news sources. 12
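
Cluster assignment of this kind can be sketched with k-means. The 2-D "story" vectors and all parameters below are invented for illustration; real news clustering operates on text features and vastly more data.

```python
# Minimal k-means sketch for grouping items by topic (toy 2-D vectors; real news
# clustering would use text embeddings and far more data).
import numpy as np

rng = np.random.default_rng(1)

# Toy corpus: feature vectors drawn around three made-up topic centers.
centers = np.array([[0.0, 0.0], [5.0, 5.0], [0.0, 5.0]])
stories = np.vstack([c + 0.3 * rng.standard_normal((20, 2)) for c in centers])

k = 3
centroids = stories[rng.choice(len(stories), k, replace=False)]  # random init
for _ in range(20):  # Lloyd's algorithm: alternate assignment and update steps
    dists = np.linalg.norm(stories[:, None, :] - centroids[None, :, :], axis=2)
    labels = dists.argmin(axis=1)        # assign each story to nearest centroid
    centroids = np.array([stories[labels == c].mean(axis=0)
                          if np.any(labels == c) else centroids[c]
                          for c in range(k)])  # recompute means, guard empties

print("cluster sizes:", np.bincount(labels, minlength=k))
```

In practice one runs several random restarts and keeps the lowest-inertia solution, since the result depends on initialization.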

Reinforcement example: playing Pong (Atari game).
Goal: learn how to win at the game Pong by repeatedly playing the game and learning which actions lead to winning and which lead to losing.
Typical problem size: 10^170 possible legal game positions.
See also: V. Mnih et al., Nature 518, 529-533 (2015). 13
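
The learn-by-playing loop can be illustrated with tabular Q-learning on a toy environment. The 1-D chain below is invented for illustration, not Pong; the Atari agent in Mnih et al. instead trains a deep network on raw pixels.

```python
# Minimal tabular Q-learning sketch on a made-up 1-D chain: the agent starts at
# state 0 and is rewarded only for reaching the rightmost state.
import numpy as np

rng = np.random.default_rng(2)

n_states = 6                       # states 0..5; reaching state 5 pays +1
Q = np.zeros((n_states, 2))        # action 0 = move left, action 1 = move right
alpha, gamma, eps = 0.5, 0.9, 0.5  # learning rate, discount, exploration rate

for _ in range(500):               # episodes of play
    s = 0
    for _ in range(200):           # step cap keeps early random episodes short
        a = int(rng.integers(2)) if rng.random() < eps else int(Q[s].argmax())
        s2 = max(s - 1, 0) if a == 0 else s + 1
        r = 1.0 if s2 == n_states - 1 else 0.0
        # Q-learning update: bootstrap from the best action in the next state.
        Q[s, a] += alpha * (r + gamma * Q[s2].max() - Q[s, a])
        s = s2
        if s == n_states - 1:
            break

greedy = Q.argmax(axis=1)          # learned policy per state
print("greedy policy (1 = move right):", greedy)
```

After enough episodes the greedy policy moves right from every non-terminal state, which is exactly "learning which actions lead to winning" from reward signals alone.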

Challenges for enterprise AI and ML applications: speed, power, and accuracy.
Algorithmic challenges: 1. Improving representation power. 2. Non-convex optimization (beyond SGD). 3. Minimizing generalization errors. 4. Robustness to noisily labeled data. 5. Robustness to adversarial attacks.
Implementation challenges: 1. Training time. 2. Lower power consumption. 3. Meaningful bit precision. 4. Latency. 5. Robustness to dynamical variations in data. 6. Limitations of data sharing (privacy issues). 14
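
Algorithmic challenge 2 (non-convex optimization) can be seen in miniature: plain gradient descent on a 1-D non-convex function settles into whichever local minimum is nearest its starting point, which is why barriers in the loss landscape matter. The function below is a made-up example, not from the slides.

```python
# f has two local minima: a shallow one near x = +0.96 and a deeper one near
# x = -1.04 (made-up 1-D example to illustrate getting stuck in local minima).
def f(x):
    return x**4 - 2 * x**2 + 0.3 * x

def grad(x):
    return 4 * x**3 - 4 * x + 0.3

def descend(x, lr=0.01, steps=5000):
    # Plain gradient descent: follows the slope to the nearest local minimum.
    for _ in range(steps):
        x -= lr * grad(x)
    return x

left = descend(-1.5)    # lands in the deep minimum
right = descend(1.5)    # same algorithm, worse loss: stuck behind the barrier
print(f"from -1.5: x={left:.3f}, f={f(left):.3f}; "
      f"from +1.5: x={right:.3f}, f={f(right):.3f}")
```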

Quantum Artificial Intelligence and Machine Learning Algorithms. Key: QA = runs on quantum annealers; SQC = runs on near-term QCs (shallow circuits); FTQC = runs on fault-tolerant QCs.
Logic-based: 1. Path-planning optimization (QA).
Supervised: 1. Support Vector Machines (FTQC). 2. Cluster assignment (FTQC). 3. Recommendation systems (FTQC). 4. Quantum deep learning (FTQC). 5. Quantum Boltzmann Machine training (QA).
Unsupervised: 1. Clustering (cluster finding): 1a. Quantum PCA (FTQC); 1b. k-means (FTQC). 2. Topological data analysis (FTQC). 3. Autoencoders (SQC).
Reinforcement: 1. Quantum and Deep Boltzmann Machine training (QA).
Notably missing so far: quantum algorithms to accelerate deep neural networks as in current widespread use. 26 Sep 2017. 15

Caveat: We don't yet know which QML algorithms will deliver meaningful speedups to enterprise problems.
Application of known QML primitives to enterprise use cases: finding out which existing quantum machine learning algorithms can be used to speed up current enterprise ML tasks is a topic of current research.
Classical-quantum data loading and storage: many of the QML algorithms developed so far operate on quantum data (not classical data), and there is an important open question about how to efficiently implement Quantum RAM, which provides quantum data when the underlying problem is classical.
Quantum speedup for quantum data: for problems for which quantum data is readily available (e.g., in quantum chemistry), some QML algorithms are exponentially faster than their classical counterparts.
For more detail, see: S. Aaronson, Nature Physics 11, 291-293 (2015). 16

QML algorithms possibly providing an end-to-end speedup: Quantum Recommendation Systems; Topological Data Analysis (ayasdi.com). Another caveat: even for these two examples, it's not 100% clear that a speedup exists for practically relevant use cases. 17

ML processing and end-to-end speedup. Supervised workflow:
Training stage: training data → training → trained model → storage. Typically takes hours/days.
Inference stage: unseen data → trained model → classification/prediction. Typically requires millisecond response times.
Goal: speed up the end-to-end process (not just a part of it) for training, inference, or both. 18
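
The two-stage split can be sketched end to end with a toy logistic-regression model in numpy; all data and numbers below are illustrative, not from the slides.

```python
# Sketch of the two-stage supervised workflow: an offline training stage that
# produces a stored model, and an online inference stage that scores unseen data.
import numpy as np

rng = np.random.default_rng(3)

# --- Training stage (typically the slow part: hours/days at real scale) ---
X = rng.standard_normal((1000, 3))
true_w = np.array([2.0, -1.0, 0.5])
y = (X @ true_w > 0).astype(float)       # labels come from a hidden linear rule

w = np.zeros(3)
for _ in range(300):                     # batch gradient descent, logistic loss
    p = 1.0 / (1.0 + np.exp(-(X @ w)))
    w -= 0.1 * (X.T @ (p - y)) / len(y)

trained_model = w                        # this is the artifact that gets stored

# --- Inference stage (must answer each request in milliseconds) ---
def predict(model, x):
    return 1.0 / (1.0 + np.exp(-(x @ model)))   # one dot product per request

unseen = rng.standard_normal((5, 3))
scores = predict(trained_model, unseen)
print("scores for unseen data:", np.round(scores, 3))
```

The asymmetry is the point: training touches the whole dataset repeatedly, while inference does a tiny amount of work per request, so a quantum speedup helps end to end only if it accelerates whichever stage dominates the use case.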

Types of ML problems that QC Ware believes are best suited to be run on quantum annealers in the medium term:
Problems with long training times, slow relaxations, or non-scalable samplings, e.g., prediction of default of financial assets (loans, mortgages, bonds, etc.).
Low-dimensional problems (small number of features), to reduce embedding overheads.
Problems for which a quantum annealer can give a speedup (e.g., non-convex optimization with tall and narrow barriers). 19
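
For context on what an annealer optimizes: such problems are mapped to QUBO form, minimizing E(x) = xᵀQx over binary vectors x. A tiny sketch with a made-up Q matrix, using classical brute-force search in place of the annealer (this is not one of QC Ware's problem mappings):

```python
# Minimal QUBO sketch: quantum annealers sample low-energy binary vectors x
# minimizing E(x) = x^T Q x. Here exhaustive search stands in for the annealer,
# which is only feasible for tiny instances like this made-up 3-variable one.
import itertools
import numpy as np

# Toy QUBO: diagonal entries are linear biases, off-diagonal entries couplings.
Q = np.array([
    [-1.0,  2.0,  0.0],
    [ 0.0, -1.0,  2.0],
    [ 0.0,  0.0, -1.0],
])

def energy(x):
    return x @ Q @ x

# Enumerate all 2^3 bitstrings and keep the lowest-energy one.
best = min((np.array(bits) for bits in itertools.product([0, 1], repeat=3)),
           key=energy)
print("lowest-energy bitstring:", best, "energy:", energy(best))
```

The positive couplings penalize turning on adjacent variables together, so the minimum sets the two non-adjacent bits; embedding overhead on real hardware comes from mapping such couplings onto the device's qubit connectivity graph.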

Types of ML problems that QC Ware believes are best suited to be run on circuit-model quantum computers in the medium term:
Problems with long training times, slow relaxations, or non-scalable samplings, e.g., recommendation systems.
Problems that have small (space × time) volume, so they can fit and run on non-fault-tolerant QCs.
Problems for which a quantum algorithm can give a speedup (e.g., problems with an inefficient classical representation). 20

Classical hardware approaches to providing speedup and reducing power consumption (vendor names appeared as logos on the original slide): one vendor uses FPGAs; one uses TPUs and GPUs; one uses GPUs; one uses FPGAs and GPUs; [Insert Cloud Platform Provider Here] uses other ASICs. 21

Integration of quantum processors into ML technology stacks. Today (classical hardware only), the stack is:
Applications: recommendation systems, image search, etc.
ML frameworks: Keras, TensorFlow, Caffe, Torch, Theano, etc.
Raw computation libraries/APIs/languages: BLAS, CUDA, Verilog, etc.
Data storage systems: Hadoop HDFS, NoSQL databases, etc.
Hardware: CPUs, GPUs, TPUs, FPGAs, and storage hardware. 22

Integration of quantum processors into ML technology stacks. Tomorrow (hybrid classical and quantum hardware and software interfaces), each classical layer gains a quantum counterpart:
Applications: QML applications alongside classical applications.
Frameworks: a QML interface alongside classical ML frameworks.
Computation: quantum computation libraries/APIs/languages alongside classical raw computation libraries/APIs/languages.
Data storage systems, as today.
Hardware: QPUs alongside CPUs, GPUs, TPUs, FPGAs, and storage hardware. 23
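
A "QML interface" dispatch layer of this kind can be sketched in a few lines. Every name below (Task, run_task, the backend functions, the routing table) is invented for illustration; no real QC Ware, framework, or vendor API is being described.

```python
# Hypothetical sketch of a QML interface layer: a dispatcher that routes each
# workload to a quantum or classical backend. All names here are made up.
from dataclasses import dataclass, field
from typing import Callable, Dict

@dataclass
class Task:
    kind: str                      # e.g. "recommendation", "image_search"
    payload: dict = field(default_factory=dict)

def cpu_backend(task: Task) -> str:
    return f"classical result for {task.kind}"

def qpu_backend(task: Task) -> str:
    return f"quantum result for {task.kind}"

# Routing table: which workloads go to the QPU. In a real stack this decision
# would depend on problem size, hardware availability, and expected speedup.
ROUTES: Dict[str, Callable[[Task], str]] = {
    "recommendation": qpu_backend,   # a QML candidate per the slides
    "image_search": cpu_backend,     # stays on classical hardware
}

def run_task(task: Task) -> str:
    backend = ROUTES.get(task.kind, cpu_backend)   # default to classical
    return backend(task)

print(run_task(Task("recommendation")))
print(run_task(Task("image_search")))
```

The design point is that applications call one interface and the hybrid stack decides, per workload, whether a QPU or classical hardware serves the request.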

Conclusion. Over the next 5 years, the AI/ML/Autonomy stack will include a variety of processors, and both quantum annealers and circuit-model QCs should accelerate performance. Many QML algorithmic primitives (building blocks) exist; however, the number of quantum algorithms known to possibly give end-to-end speedups is small, and work is urgently needed to find other use cases where end-to-end speedup is possible. Standard feed-forward, convolutional, and recurrent neural networks have seen widespread adoption across many areas of ML over the past ~5 years; a major open challenge is how quantum computers may accelerate the training or inference stages of neural networks, improve their accuracy, or both. 24