Linear logic and deep learning. Huiyi Hu, Daniel Murfet


1 Linear logic and deep learning Huiyi Hu, Daniel Murfet

2 Move 37, Game 2, Sedol-AlphaGo. "It's not a human move. I've never seen a human play this move. So beautiful." - Fan Hui, three-time European Go champion.

3 Outline
- Introduction to deep learning
- What is differentiable programming?
- Linear logic and the polynomial semantics
- Our model

4 Intro to deep learning. A deep neural network (DNN) is a particular kind of function $F_w : \mathbb{R}^m \to \mathbb{R}^n$ which depends in a differentiable way on a vector of weights. Example: image classification, as in cat or dog? $F_w(\text{image}) = (0.7, 0.3) \in \mathbb{R}^2$, i.e. $p_{\text{cat}} = 0.7$, $p_{\text{dog}} = 0.3$.

5 Intro to deep learning. A deep neural network (DNN) is a particular kind of function $F_w : \mathbb{R}^m \to \mathbb{R}^n$ which depends in a differentiable way on a vector of weights. Example: playing the game of Go, with $F_w(\text{current board}) \in \mathbb{R}^{19 \times 19}$. [Figure: policy network, value network and tree evaluation diagrams, with surrounding caption text, from the AlphaGo paper (Silver et al.).]

6 Intro to deep learning. A deep neural network (DNN) is a particular kind of function $F_w : \mathbb{R}^m \to \mathbb{R}^n$ which depends in a differentiable way on a vector of weights. A randomly initialised neural network is trained on sample data $\{(x_i, y_i)\}_{i \in I}$, $x_i \in \mathbb{R}^m$, $y_i \in \mathbb{R}^n$, by varying the weight vector in order to minimise the error $E_w = \sum_i (F_w(x_i) - y_i)^2$.
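A minimal numpy sketch of this training loop, assuming a single-layer network and the nonlinearity $\sigma(x) = \frac{1}{2}(x + |x|)$ from the later slides; all names and sizes here are illustrative, not from the talk:

```python
import numpy as np

# Sketch: fit F_w(x) = sigma(W x) to sample data {(x_i, y_i)} by gradient
# descent on the squared error E_w = sum_i (F_w(x_i) - y_i)^2.
rng = np.random.default_rng(0)
m, n, N = 4, 2, 100                      # input dim, output dim, sample count
xs = rng.normal(size=(N, m))
W_true = rng.normal(size=(n, m))
ys = np.maximum(W_true @ xs.T, 0).T      # synthetic targets

sigma = lambda x: 0.5 * (x + np.abs(x))  # ReLU, as on the slides

W = rng.normal(size=(n, m))              # randomly initialised weights
lr = 1e-3
for step in range(500):
    pred = sigma(xs @ W.T)               # F_w(x_i) for all samples
    err = pred - ys
    # gradient of E_w; the derivative of sigma is the indicator (pred > 0)
    grad = 2 * ((err * (pred > 0)).T @ xs)
    W -= lr * grad

print("final error:", np.sum((sigma(xs @ W.T) - ys) ** 2))
```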

7 The training process (gradient descent). [Figure: plot of the error $E_w$ over weight space, with iterates $w_1, w_2, \ldots$ descending from the initial network to the optimal network.]

8 Intro to deep learning. A DNN is made up of many layers of neurons connecting the input (on the left) to the output (on the right), as in the following diagram: [Figure: a layer of neurons with inputs $x_j$, outputs $y_i$ and weights $w_{ij}$ on the connections.] $y_i = \sigma\big(\sum_j w_{ij} x_j\big)$, where $\sigma(x) = \frac{1}{2}(x + |x|)$.

9 Intro to deep learning. A DNN is made up of many layers of neurons connecting the input (on the left) to the output (on the right), as in the following diagram: [Figure: layers $(k)$ and $(k+1)$ with weights $w^{(k)}_{ij}$ on the connections.] $y_i = \sigma\big(\sum_j w^{(k)}_{ij} x_j\big)$, where $\sigma(x) = \frac{1}{2}(x + |x|)$.

10 Intro to deep learning. A DNN is made up of many layers of neurons connecting the input (on the left) to the output (on the right), as in the following diagram: [Figure: layers $(k)$ and $(k+1)$ with weights $w^{(k)}_{ij}$.] $y_i = \sigma\big(\sum_j w^{(k)}_{ij} x_j\big)$, or in vector form $\vec{y} = \sigma(W^{(k)} \vec{x})$, where $\vec{x} = (x_1, x_2, \ldots, x_j, \ldots)^T$, $W^{(k)} = (w^{(k)}_{ij})_{i,j}$ and $\sigma(x) = \frac{1}{2}(x + |x|)$ is applied componentwise.

11 Intro to deep learning. A DNN is made up of many layers of neurons connecting the input layer $(0)$ to the output layer $(L+1)$: $F_w : \mathbb{R}^m \to \mathbb{R}^n$, with $F_w(\vec{x}) = \sigma\big(W^{(L)} \sigma(\cdots \sigma(W^{(1)} \sigma(W^{(0)} \vec{x})) \cdots)\big)$.
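The whole composite can be written in a few lines of numpy; a sketch, with layer sizes made up for illustration:

```python
import numpy as np

sigma = lambda x: 0.5 * (x + np.abs(x))   # sigma(x) = (x + |x|)/2, i.e. ReLU

def F(weights, x):
    """F_w(x) = sigma(W^(L) ... sigma(W^(1) sigma(W^(0) x))): apply each layer in turn."""
    for W in weights:
        x = sigma(W @ x)
    return x

rng = np.random.default_rng(1)
sizes = [8, 16, 16, 3]    # R^8 -> R^3 through two hidden layers (illustrative)
weights = [rng.normal(size=(sizes[k + 1], sizes[k])) for k in range(len(sizes) - 1)]
print(F(weights, rng.normal(size=8)))     # a point of R^3
```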

12 Applied deep learning. Natural Language Processing (NLP) includes machine translation, processing of natural language queries (e.g. Siri) and question answering with respect to unstructured text (e.g. "where is the ring at the end of LoTR?"). Two common neural network architectures for NLP are Recurrent Neural Networks (RNNs) and Long Short-Term Memory (LSTM); both are variations on the DNN architecture. Many hard unsolved NLP problems require sophisticated reasoning about context and an understanding of algorithms. To solve these problems, recent work equips RNNs/LSTMs with von Neumann elements, e.g. stack memory. RNN/LSTM controllers + differentiable memory is one model of differentiable programming, currently being explored at e.g. Google and Facebook.

13 These memory-enhanced neural networks are already being used in production, e.g. Facebook Messenger (~900m users). Salon, "Facebook Thinks It Has Found the Secret to Making Bots Less Dumb", June.

14 What is differentiable programming? The idea is that a differentiable program $P_w : I \to O$ should depend smoothly on its weights, and, by varying the weights, one would learn a program with desired properties. Question: what if, instead of coupling neural net controllers to imperative algorithmic elements (e.g. memory), we couple a neural net to functional algorithmic elements? Problem: how to make an abstract functional language like the lambda calculus, System F, etc. differentiable? Our idea: use categorical semantics in a category of differentiable mathematical objects compatible with neural networks (e.g. matrices of polynomials).

15 Linear logic and semantics. Discovered by Girard in the 1980s, linear logic is a substructural logic with contraction and weakening available only for formulas marked with a new connective $!$ (called the exponential). The usual connectives of logic (e.g. conjunction, implication) are decomposed into $!$ together with a linearised version of that connective (called, respectively, tensor $\otimes$ and linear implication $\multimap$). Under the Curry-Howard correspondence, linear logic corresponds to a programming language with resource management; its categorical semantics is given by symmetric monoidal categories equipped with a special kind of comonad.

16 Linear logic and semantics. Variables: $\alpha, \beta, \gamma, \ldots$ Formulas: $!F$, $F \otimes F'$, $F \multimap F'$, $F \,\&\, F'$, $\forall \alpha\, F$, ... Constants:
$\mathbf{int} = \forall \alpha\; !(\alpha \multimap \alpha) \multimap (\alpha \multimap \alpha)$
$\mathbf{bint} = \forall \alpha\; !(\alpha \multimap \alpha) \multimap (!(\alpha \multimap \alpha) \multimap (\alpha \multimap \alpha))$
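Under the Curry-Howard reading, a proof of $\mathbf{int}$ behaves like a Church numeral: it consumes a map $f : \alpha \to \alpha$ and iterates it. A hedged Python illustration (not from the talk; names are made up):

```python
# Sketch of the Church-numeral reading of int = ∀α !(α ⊸ α) ⊸ (α ⊸ α):
# the numeral n takes f : α -> α and returns f composed with itself n times.
def church_int(n):
    def iterate(f):
        def g(x):
            for _ in range(n):
                x = f(x)
            return x
        return g
    return iterate

three = church_int(3)
print(three(lambda x: x + 1)(0))   # prints 3
```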

17 Deduction rules for linear logic. In all deduction rules the sets $\Gamma$ and $\Delta$ may be empty; in particular, the promotion rule may be used with an empty premise. In the promotion rule, $!\Gamma$ stands for a list of formulas each of which is preceded by an exponential modality, for example $!A_1, \ldots, !A_n$.

(Axiom): $A \vdash A$.
(Cut): from $\Gamma \vdash A$ and $\Delta, A \vdash B$ infer $\Gamma, \Delta \vdash B$.
(Exchange): from $\Gamma, A, B, \Delta \vdash C$ infer $\Gamma, B, A, \Delta \vdash C$.
(Left $1$): from $\Gamma \vdash C$ infer $\Gamma, 1 \vdash C$.
(Left $\otimes$): from $\Gamma, A, B \vdash C$ infer $\Gamma, A \otimes B \vdash C$.
(Right $\otimes$): from $\Gamma \vdash A$ and $\Delta \vdash B$ infer $\Gamma, \Delta \vdash A \otimes B$.
(Left $\multimap$): from $\Gamma \vdash A$ and $\Delta, B \vdash C$ infer $\Gamma, \Delta, A \multimap B \vdash C$.
(Right $\multimap$): from $\Gamma, A \vdash B$ infer $\Gamma \vdash A \multimap B$.
(Promotion): from $!\Gamma \vdash A$ infer $!\Gamma \vdash\, !A$.
(Dereliction): from $\Gamma, A \vdash B$ infer $\Gamma, !A \vdash B$.
(Contraction): from $\Gamma, !A, !A \vdash B$ infer $\Gamma, !A \vdash B$.
(Weakening): from $\Gamma \vdash B$ infer $\Gamma, !A \vdash B$.

Second-order linear logic: the formulas are defined recursively as follows: any formula of propositional linear logic is a formula, and if $A$ is a formula then so is $\forall x. A$ for any propositional variable $x$. There are two new deduction rules:

($\forall$R): from $\Gamma \vdash A$ infer $\Gamma \vdash \forall x. A$, where we require that $x$ is not free in any formula of $\Gamma$.
($\forall$L): from $\Gamma, A[B/x] \vdash C$ infer $\Gamma, \forall x. A \vdash C$. Here $A[B/x]$ means $A$ with all free occurrences of $x$ replaced by a formula $B$ (as usual, there is some chicanery necessary to avoid variable capture).

18 Binary integers. $\mathbf{bint} = \forall \alpha\; !(\alpha \multimap \alpha) \multimap (!(\alpha \multimap \alpha) \multimap (\alpha \multimap \alpha))$. A binary sequence $S \in \{0,1\}^*$ is mapped to a proof $t_S$ of $\vdash \mathbf{bint}$. [Proof tree for $t_{001}$: from axioms $\alpha \vdash \alpha$, three (Left $\multimap$) steps give $\alpha \multimap \alpha,\, \alpha \multimap \alpha,\, \alpha \multimap \alpha,\, \alpha \vdash \alpha$; then (Right $\multimap$), three derelictions and a contraction give $!(\alpha \multimap \alpha),\, !(\alpha \multimap \alpha) \vdash \alpha \multimap \alpha$; two final (Right $\multimap$) steps yield $\vdash\, !(\alpha \multimap \alpha) \multimap (!(\alpha \multimap \alpha) \multimap (\alpha \multimap \alpha))$.]
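A hedged functional sketch of $t_S$ (illustration only, not the talk's notation): it consumes two maps $f_0, f_1 : \alpha \to \alpha$, one per digit, and composes them in the order of $S$; the composition order below is chosen to mirror the matrix product $\llbracket t_{001} \rrbracket = \llbracket t_0 \rrbracket \llbracket t_0 \rrbracket \llbracket t_1 \rrbracket$ on the next slides.

```python
def t(S):
    # Proof t_S of bint: given f0, f1 : α -> α, return the composite
    # f_{s_1} ∘ f_{s_2} ∘ ... ∘ f_{s_k} for S = s_1 s_2 ... s_k.
    def body(f0, f1):
        def g(x):
            for digit in reversed(S):   # rightmost factor acts first
                x = (f0 if digit == "0" else f1)(x)
            return x
        return g
    return body

# t_001 applied to two string maps is f0 ∘ f0 ∘ f1:
print(t("001")(lambda s: s + "0", lambda s: s + "1")(""))   # prints "100"
```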

19 Polynomial semantics (sketch). $\llbracket \alpha \rrbracket = \mathbb{R}^d$. $\llbracket \alpha \multimap \alpha \rrbracket = \mathcal{L}(\mathbb{R}^d, \mathbb{R}^d) = M_d(\mathbb{R})$. $\llbracket\, !(\alpha \multimap \alpha) \rrbracket = \mathbb{R}[\{a_{ij}\}_{1 \le i,j \le d}] = \mathbb{R}[a_{11}, \ldots, a_{ij}, \ldots, a_{dd}]$ (one variable per entry in a $d \times d$ matrix). $\llbracket \mathbf{bint} \rrbracket = \llbracket\, !(\alpha \multimap \alpha) \multimap (!(\alpha \multimap \alpha) \multimap (\alpha \multimap \alpha)) \rrbracket = M_d\big(\mathbb{R}[\{a_{ij}, a'_{ij}\}_{1 \le i,j \le d}]\big)$ (matrices of polynomials in the variables $a, a'$).

20 Polynomial semantics (sketch). $\llbracket \mathbf{bint} \rrbracket = M_d\big(\mathbb{R}[\{a_{ij}, a'_{ij}\}_{1 \le i,j \le d}]\big)$ (matrices of polynomials in the variables $a, a'$). For $S \in \{0,1\}^*$ and $t_S : \mathbf{bint}$ we have $\llbracket t_S \rrbracket \in \llbracket \mathbf{bint} \rrbracket$, with
$$\llbracket t_0 \rrbracket = \begin{pmatrix} a_{11} & \cdots & a_{1d} \\ \vdots & & \vdots \\ a_{d1} & \cdots & a_{dd} \end{pmatrix}, \qquad \llbracket t_1 \rrbracket = \begin{pmatrix} a'_{11} & \cdots & a'_{1d} \\ \vdots & & \vdots \\ a'_{d1} & \cdots & a'_{dd} \end{pmatrix}, \qquad \llbracket t_{001} \rrbracket = \llbracket t_0 \rrbracket\, \llbracket t_0 \rrbracket\, \llbracket t_1 \rrbracket.$$
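These denotations are easy to reproduce symbolically; a small sympy sketch for $d = 2$ (using $b_{ij}$ in place of $a'_{ij}$ to get valid Python symbol names):

```python
import sympy as sp

# [[t_0]] is the generic d x d matrix in variables a_ij, [[t_1]] the one in
# b_ij (standing in for a'_ij), and [[t_001]] = [[t_0]][[t_0]][[t_1]].
d = 2
A = sp.Matrix(d, d, lambda i, j: sp.Symbol(f"a{i+1}{j+1}"))   # [[t_0]]
B = sp.Matrix(d, d, lambda i, j: sp.Symbol(f"b{i+1}{j+1}"))   # [[t_1]]
t001 = A * A * B                                              # [[t_001]]
print(t001[0, 0])   # a polynomial in the a's and b's
```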

21 Polynomial semantics (sketch). $\llbracket \mathbf{bint} \rrbracket = M_d\big(\mathbb{R}[\{a_{ij}, a'_{ij}\}_{1 \le i,j \le d}]\big)$ (matrices of polynomials in the variables $a, a'$). For $t_S : \mathbf{bint}$, $\llbracket t_S \rrbracket \in \llbracket \mathbf{bint} \rrbracket$. [Figure: the vector space $\llbracket \mathbf{bint} \rrbracket$ with the points $\llbracket t_0 \rrbracket$, $\llbracket t_1 \rrbracket$, $\llbracket t_{001} \rrbracket$ marked.] Upshot: proofs/programs to vectors.

22 Polynomial semantics (sketch). Given $T \in \llbracket \mathbf{bint} \rrbracket = M_d\big(\mathbb{R}[\{a_{ij}, a'_{ij}\}_{1 \le i,j \le d}]\big)$ and matrices $\alpha, \beta \in M_d(\mathbb{R})$, define $\mathrm{eval}_T(\alpha, \beta) = T\big|_{a_{ij} = \alpha_{ij},\; a'_{ij} = \beta_{ij}}$. [Figure: the vector space $\llbracket \mathbf{bint} \rrbracket$ with a general point $T$ marked alongside $\llbracket t_0 \rrbracket$, $\llbracket t_1 \rrbracket$, $\llbracket t_{001} \rrbracket$.] Upshot: proofs/programs to vectors.

23 Polynomial semantics (sketch). Evaluating the denotations of proofs recovers the underlying programs: $\mathrm{eval}_{\llbracket t_0 \rrbracket}(\alpha, \beta) = \alpha$ and $\mathrm{eval}_{\llbracket t_{001} \rrbracket}(\alpha, \beta) = \alpha\alpha\beta$. [Figure: as on the previous slide.] Upshot: proofs/programs to vectors.
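A sketch of $\mathrm{eval}$ as substitution, continuing the $d = 2$ sympy example above; the helper `ev` is hypothetical, introduced only for illustration:

```python
import sympy as sp
import numpy as np

d = 2
A = sp.Matrix(d, d, lambda i, j: sp.Symbol(f"a{i+1}{j+1}"))   # [[t_0]]
B = sp.Matrix(d, d, lambda i, j: sp.Symbol(f"b{i+1}{j+1}"))   # [[t_1]]

def ev(T, alpha, beta):
    # eval_T(alpha, beta): substitute numeric matrices for the variables
    subs = {sp.Symbol(f"a{i+1}{j+1}"): alpha[i, j] for i in range(d) for j in range(d)}
    subs.update({sp.Symbol(f"b{i+1}{j+1}"): beta[i, j] for i in range(d) for j in range(d)})
    return np.array(T.subs(subs), dtype=float)

alpha = np.array([[1.0, 2.0], [0.0, 1.0]])
beta  = np.array([[0.0, 1.0], [1.0, 0.0]])
print(ev(A, alpha, beta))           # eval over [[t_0]] gives back alpha
print(ev(A * A * B, alpha, beta))   # equals alpha @ alpha @ beta
```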

24 Our model: RNN controller + differentiable linear logic module. [Figure: block diagram of the recurrent controller $h^{(t)}$ coupled to the evaluation module $\mathrm{eval}_{T^{(t)}}$.] Here $T^{(t)} \in \llbracket \mathbf{bint} \rrbracket$ and $H, S, U, Q_\alpha, Q_\beta, Q_T$ are matrices of weights:
$h^{(t)} = \sigma\big(S\, \mathrm{eval}_{T^{(t)}}(\alpha^{(t)}, \beta^{(t)}) + H h^{(t-1)} + U x^{(t)}\big)$
$\alpha^{(t)} = \sigma(Q_\alpha h^{(t-1)})$
$\beta^{(t)} = \sigma(Q_\beta h^{(t-1)})$
$T^{(t)} = \sigma(Q_T h^{(t-1)})$
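A hedged numpy sketch of one step of this recurrence. To keep it self-contained the linear logic module is specialised: rather than letting the controller emit a general $T^{(t)} \in \llbracket \mathbf{bint} \rrbracket$, we fix $T = \llbracket t_{001} \rrbracket$, so that $\mathrm{eval}_T(\alpha, \beta) = \alpha\alpha\beta$; all shapes and scales are made up:

```python
import numpy as np

sigma = lambda x: 0.5 * (x + np.abs(x))
d, hdim, xdim = 3, 8, 5

rng = np.random.default_rng(2)
H  = rng.normal(size=(hdim, hdim)) * 0.1      # weight matrices of the model
S  = rng.normal(size=(hdim, d * d)) * 0.1
U  = rng.normal(size=(hdim, xdim)) * 0.1
Qa = rng.normal(size=(d * d, hdim)) * 0.1
Qb = rng.normal(size=(d * d, hdim)) * 0.1

def step(h_prev, x_t):
    alpha = sigma(Qa @ h_prev).reshape(d, d)  # alpha^(t) = sigma(Q_alpha h^(t-1))
    beta  = sigma(Qb @ h_prev).reshape(d, d)  # beta^(t)  = sigma(Q_beta  h^(t-1))
    ev = (alpha @ alpha @ beta).reshape(-1)   # eval_T(alpha, beta), T = [[t_001]]
    return sigma(S @ ev + H @ h_prev + U @ x_t)

h = np.zeros(hdim)
for _ in range(4):
    h = step(h, rng.normal(size=xdim))
print(h)
```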
