A Bayesian model for event-based trust

A Bayesian model for event-based trust
Elements of a foundation for computational trust

Vladimiro Sassone, ECS, University of Southampton
Joint work with K. Krukow and M. Nielsen
Oxford, 9 March 2007

V. Sassone (Soton), Foundations of Computational Trust, 07.03.09, 1/33

Computational trust

Trust is an ineffable notion that permeates very many things. What trust are we going to have in this talk? A computer idealisation of trust to support decision-making in open networks: no human emotion, nor any philosophical/sociological concept. It is gathering prominence in open applications involving safety guarantees in a wide sense:
- credential-based trust: e.g., public-key infrastructures, authentication and resource access control, network security.
- reputation-based trust: e.g., social networks, P2P, trust metrics, probabilistic approaches.
- trust models: e.g., security policies, languages, game theory.
- trust in information sources: e.g., information filtering and provenance, content trust, user interaction, social concerns.

Trust and reputation systems

Reputation:
- behavioural: the perception that an agent creates through past actions about its intentions and norms of behaviour.
- social: calculated on the basis of observations made by others.
An agent's reputation may affect the trust that others have toward it.

Trust:
- subjective: a level of the subjective expectation an agent has about another's future behaviour, based on the history of their encounters and of hearsay.
Confidence in the trust assessment is also a parameter of importance.

Trust and security

E.g., reputation-based access control: p's trust in q's actions at time t is determined by p's observations of q's behaviour up until time t, according to a given policy ψ.

Example: you download what claims to be a new cool browser from some unknown site. Your trust policy may be: allow the program to connect to a remote site if and only if it has neither tried to open a local file that it has not created, nor to modify a file it has created, nor to create a sub-process.
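The browser policy above can be sketched as a predicate over an observed session history. This is a minimal illustration, not the authors' implementation; the event names are hypothetical labels for the three forbidden behaviours.

```python
# Hypothetical event names for the three behaviours the policy forbids.
DISALLOWED = {"open_uncreated_file", "modify_created_file", "create_subprocess"}

def may_connect(history):
    """Allow 'connect' iff the observed history contains no disallowed event."""
    return all(event not in DISALLOWED for event in history)

print(may_connect(["create_file", "modify_created_file"]))  # False
print(may_connect(["create_file", "open_created_file"]))    # True
```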

Outline

1. Some computational trust systems
2. Towards model comparison
3. Modelling behavioural information: event structures as a trust model
4. Probabilistic event structures
5. A Bayesian event model

Simple Probabilistic Systems

The model λ_θ: each principal p behaves in each interaction according to a fixed and independent probability θ_p of success (and therefore 1 − θ_p of failure).

The framework, an interface for trust computation algorithms A:
- Input: a sequence h = x_1 x_2 ... x_n, for n ≥ 0 and x_i ∈ {s, f}.
- Output: a probability distribution π : {s, f} → [0, 1].
- Goal: the output π approximates (θ_p, 1 − θ_p) as well as possible, under the hypothesis that the input h is the outcome of interactions with p.

Maximum likelihood (Despotovic and Aberer)

Trust computation A_0:

  A_0(s | h) = N_s(h) / |h|,   A_0(f | h) = N_f(h) / |h|,

where N_x(h) is the number of x's in h. Bayesian analysis inspired by the λ_β model: f(θ | α, β) ∝ θ^(α−1) (1 − θ)^(β−1).

Properties:
- Well-defined semantics: A_0(s | h) is interpreted as the probability of success in the next interaction.
- Solidly based on probability theory and Bayesian analysis.
- Formal result: A_0(s | h) → θ_p as |h| → ∞.

Beta models (Mui et al.)

Even more tightly inspired by Bayesian analysis and by λ_β. Trust computation A_1:

  A_1(s | h) = (N_s(h) + 1) / (|h| + 2),   A_1(f | h) = (N_f(h) + 1) / (|h| + 2),

where N_x(h) is the number of x's in h.

Properties:
- Well-defined semantics: A_1(s | h) is interpreted as the probability of success in the next interaction.
- Solidly based on probability theory and Bayesian analysis.
- Formal result (Chernoff bound): Prob[error ≥ ε] ≤ 2 e^(−2mε²), where m is the number of trials.
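The two estimators A_0 and A_1 can be sketched in a few lines. Histories are strings over {'s', 'f'}; this is an illustrative transcription of the formulas above, not the cited authors' code.

```python
def A0(h):
    """Maximum-likelihood estimate: relative frequencies in h (requires h nonempty)."""
    n = len(h)
    return {"s": h.count("s") / n, "f": h.count("f") / n}

def A1(h):
    """Beta-model estimate: add-one smoothing, defined even for the empty history."""
    n = len(h)
    return {"s": (h.count("s") + 1) / (n + 2), "f": (h.count("f") + 1) / (n + 2)}

print(A0("ssf"))  # {'s': 0.666..., 'f': 0.333...}
print(A1("ssf"))  # {'s': 0.6, 'f': 0.4}
```

Note how A_1 never outputs probability 0 or 1, which is exactly what the cross-entropy comparison later turns on.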

Our elements of foundation

Recall the framework, an interface for trust computation algorithms A:
- Input: a sequence h = x_1 x_2 ... x_n, for n ≥ 0 and x_i ∈ {s, f}.
- Output: a probability distribution π : {s, f} → [0, 1].
- Goal: the output π approximates (θ_p, 1 − θ_p) as well as possible, under the hypothesis that the input h is the outcome of interactions with p.

We would like to consolidate in two directions:
1. model comparison;
2. a complex event model.

Cross entropy

An information-theoretic distance on distributions. The cross entropy of distributions p, q : {o_1, ..., o_m} → [0, 1] is

  D(p ‖ q) = Σ_{i=1}^{m} p(o_i) log( p(o_i) / q(o_i) ).

It holds that D(p ‖ q) ≥ 0, and D(p ‖ q) = 0 iff p = q. It is an established measure in statistics for comparing distributions, and information-theoretic: the average amount of information discriminating p from q.
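The formula above (the Kullback-Leibler form) can be computed directly, with the standard convention that terms with p(o_i) = 0 contribute 0. A minimal sketch, with distributions as lists of probabilities:

```python
import math

def D(p, q):
    """D(p || q) in bits; terms with p_i = 0 contribute 0 by convention."""
    return sum(pi * math.log2(pi / qi) for pi, qi in zip(p, q) if pi > 0)

print(D([0.5, 0.5], [0.5, 0.5]))  # 0.0
print(D([1.0, 0.0], [0.5, 0.5]))  # 1.0 (one bit)
```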

Expected cross entropy

A measure on probabilistic trust algorithms. The goal of a probabilistic trust algorithm A is, given a history X, to approximate a distribution on the outcomes O = {o_1, ..., o_m}. Different histories X result in different output distributions A(· | X).

The expected cross entropy from λ to A:

  ED_n(λ ‖ A) = Σ_{X ∈ O^n} Prob(X | λ) · D( Prob(· | X, λ) ‖ A(· | X) ).
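For the simple model λ_θ the expectation above can be computed exactly by enumerating all histories of length n over O = {s, f}; under λ_θ the true next-outcome distribution is (θ, 1 − θ) regardless of the history. A minimal self-contained sketch (not the authors' code), using the smoothed estimator A_1:

```python
import itertools, math

def D(p, q):
    return sum(pi * math.log2(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def A1(h):
    n = len(h)
    return ((h.count("s") + 1) / (n + 2), (h.count("f") + 1) / (n + 2))

def ED(theta, n, A):
    """Expected cross entropy: enumerate all 2^n histories, weight by likelihood."""
    total = 0.0
    for X in itertools.product("sf", repeat=n):
        h = "".join(X)
        prob_X = theta ** h.count("s") * (1 - theta) ** h.count("f")
        total += prob_X * D((theta, 1 - theta), A1(h))
    return total

print(ED(0.8, 5, A1))  # a small positive number
```

For instance, at θ = 1 only the all-s history has positive probability, so ED_n(λ ‖ A_1) reduces to log2((n+2)/(n+1)).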

An application of cross entropy (1/2)

Consider the beta model λ_β and the algorithms A_0 of maximum likelihood (Despotovic et al.) and the beta algorithm A_1 (Mui et al.).

Theorem. If θ = 0 or θ = 1, then A_0 computes the exact distribution, whereas A_1 does not; that is, for all n > 0 we have

  ED_n(λ_β ‖ A_0) = 0 < ED_n(λ_β ‖ A_1).

If 0 < θ < 1, then ED_n(λ_β ‖ A_0) = ∞, and A_1 is always better.

An application of cross entropy (2/2)

A parametric algorithm A_ε:

  A_ε(s | h) = (N_s(h) + ε) / (|h| + 2ε),   A_ε(f | h) = (N_f(h) + ε) / (|h| + 2ε).

Theorem. For any θ ∈ [0, 1] with θ ≠ 1/2, there exists ε̂ ∈ [0, ∞) that minimises ED_n(λ_β ‖ A_ε), simultaneously for all n. Furthermore, ED_n(λ_β ‖ A_ε) is a decreasing function of ε on the interval (0, ε̂), and increasing on (ε̂, ∞).

That is, unless behaviour is completely unbiased, there exists a unique best algorithm A_ε̂ that for all n outperforms all the others. If θ = 1/2, the larger the ε, the better. Algorithm A_0 is optimal for θ = 0 and for θ = 1; algorithm A_1 is optimal for θ = 1/2 ± 1/√12.

A trust model based on event structures

Move from O = {s, f} to complex outcomes.

Interactions and protocols: at an abstract level, entities in a distributed system interact according to protocols; information about an external entity is just information about (the outcome of) a number of (past) protocol runs with that entity.

Events as a model of information: a protocol can be specified as a concurrent process, at different levels of abstraction. Event structures were invented to give formal semantics to truly concurrent processes, expressing causation and conflict.

A model for behavioural information

An event structure is ES = (E, ≤, #), with E a set of events, and ≤, # the causality and conflict relations on E. Information about a session is a finite set of events x ⊆ E, called a configuration (which is conflict-free and causally closed). Information about several interactions is a sequence of outcomes h = x_1 x_2 ... x_n of configurations in C_ES, called a history.

eBay (simplified) example: pay is in conflict with ignore; payment enables confirm or time-out (in conflict); feedback is one of positive, neutral or negative (pairwise in conflict). E.g.:

  h = {pay, confirm, pos} {pay, confirm, neu} {pay}

Running example: interactions over an e-purse

(Event structure diagram: reject and grant are in conflict; grant enables the pairs authentic vs. forged and correct vs. incorrect, each pair in conflict.)

Modelling outcomes and behaviour

Outcomes are (maximal) configurations. In the e-purse example the configurations are

  ∅, {r}, {g}, {g,a}, {g,f}, {g,c}, {g,i}, {g,a,c}, {g,f,c}, {g,a,i}, {g,f,i},

of which {r}, {g,a,c}, {g,f,c}, {g,a,i} and {g,f,i} are maximal. Behaviour is a sequence of outcomes.
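The configuration lattice above can be enumerated mechanically: a configuration is a conflict-free, causally closed subset of E. A minimal sketch (not the authors' code) for the e-purse structure, with direct conflicts r#g, a#f, c#i and grant as the cause of the other four events:

```python
from itertools import combinations

E = {"r", "g", "a", "f", "c", "i"}
causes = {"a": {"g"}, "f": {"g"}, "c": {"g"}, "i": {"g"}}       # e requires causes[e]
conflict = {frozenset("rg"), frozenset("af"), frozenset("ci")}  # direct conflicts

def is_configuration(x):
    closed = all(causes.get(e, set()) <= x for e in x)          # causally closed
    free = all(frozenset((d, e)) not in conflict
               for d in x for e in x if d != e)                 # conflict-free
    return closed and free

configs = [set(s) for k in range(len(E) + 1)
           for s in combinations(sorted(E), k) if is_configuration(set(s))]
maximal = [x for x in configs if not any(x < y for y in configs)]

print(len(configs), len(maximal))  # 11 configurations, 5 of them maximal
```

Inherited conflicts (e.g. r with a) need no explicit listing: any set containing a must contain g by causal closure, and r#g then rules it out.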

Confusion-free event structures (Varacca et al.)

Immediate conflict #_μ: e #_μ e′ iff e # e′ and there is a configuration x that enables both. Confusion-free: #_μ is transitive, and e #_μ e′ implies [e) = [e′). A cell is a maximal c ⊆ E such that distinct e, e′ ∈ c implies e #_μ e′.

So there are three cells in the e-purse event structure: {reject, grant}, {authentic, forged} and {correct, incorrect}.

A cell valuation is a function p : E → [0, 1] such that p[c] = 1 for all cells c, i.e., p sums to 1 on each cell.

Cell valuation

In the e-purse example, take p(reject) = 1/4, p(grant) = 3/4; p(authentic) = 4/5, p(forged) = 1/5; p(correct) = 3/5, p(incorrect) = 2/5. This induces values on configurations:

  p(∅) = 1, p({r}) = 1/4, p({g}) = 3/4,
  p({g,a}) = 3/5, p({g,f}) = 3/20, p({g,c}) = 9/20, p({g,i}) = 3/10,
  p({g,a,c}) = 9/25, p({g,f,c}) = 9/100, p({g,a,i}) = 6/25, p({g,f,i}) = 6/100.

Properties of cell valuations

Define p(x) = Π_{e ∈ x} p(e). Then:
- p(∅) = 1;
- p(x) ≥ p(x′) if x ⊆ x′;
- p is a probability distribution on the maximal configurations.

So p(x) is the probability that x is contained in the final outcome.
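These properties are easy to check numerically with the e-purse valuation from the previous slide; a minimal sketch:

```python
from math import prod, isclose

# Cell valuation from the e-purse example: each cell sums to 1.
p = {"r": 1/4, "g": 3/4, "a": 4/5, "f": 1/5, "c": 3/5, "i": 2/5}

def val(x):
    """p(x) as the product of p(e) over e in x (empty product = 1)."""
    return prod(p[e] for e in x)

maximal = [{"r"}, {"g","a","c"}, {"g","f","c"}, {"g","a","i"}, {"g","f","i"}]

print(val(set()))                    # 1 (empty product)
print(val({"g", "a", "c"}))          # ≈ 9/25 = 0.36
print(sum(val(x) for x in maximal))  # ≈ 1.0: a distribution on maximal configs
```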

Estimating cell valuations

How to assign valuations to cells? They are the model's unknowns.

Theorem (Bayes): Prob[Θ | X, λ] ∝ Prob[X | Θ, λ] · Prob[Θ | λ].

A second-order notion: we are not interested in X or its probability, but in the expected value of Θ! So we will:
- start with a prior hypothesis on Θ; this will be a cell valuation;
- record the events X as they happen during the interactions;
- compute the posterior; this is a new model fitting the evidence better and allowing us better predictions (in a precise sense).

But: the posteriors need to be (interpretable as) cell valuations.
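The prior-times-likelihood update can be sketched for a single binary cell using a grid approximation of the distribution over the cell's unknown parameter Θ (my illustration, not the slides' construction):

```python
N = 1001
grid = [k / (N - 1) for k in range(N)]  # candidate values of Theta
prior = [1.0 / N] * N                   # uniform prior over the grid

def update(prior, successes, failures):
    """Posterior over the grid: prior * likelihood, renormalised."""
    post = [pr * th**successes * (1 - th)**failures
            for pr, th in zip(prior, grid)]
    z = sum(post)
    return [w / z for w in post]

post = update(prior, successes=7, failures=3)
mean = sum(th * w for th, w in zip(grid, post))
print(round(mean, 3))  # ≈ 8/12 ≈ 0.667, the Beta(8, 4) posterior mean
```

The posterior mean of Θ is exactly the kind of quantity the slide points at: the expected value of the unknown, not the probability of X.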

Cells vs eventless outcomes

Let c_1, ..., c_M be the set of cells of E, with c_i = {e^i_1, ..., e^i_{K_i}}.

A cell valuation assigns a distribution Θ_{c_i} to each c_i, in the same way as an eventless model assigns a distribution θ to {s, f}.

The occurrence of an x from {s, f} is a random process with two outcomes: a binomial (Bernoulli) trial on θ. The occurrence of an event from cell c_i is a random process with K_i outcomes, that is, a multinomial trial on Θ_{c_i}.

To exploit this analogy we only need to lift the λ_β model to a model based on multinomial experiments.
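The analogy can be made concrete with a small sketch (mine, not from the slides): a Bernoulli trial is just the K = 2 special case of a multinomial trial. The outcome names below are illustrative.

```python
from random import random, seed

def bernoulli_trial(theta):
    """One two-outcome trial: 's' with probability theta, else 'f'."""
    return "s" if random() < theta else "f"

def multinomial_trial(theta):
    """One K-outcome trial on a distribution {outcome: probability}."""
    u, acc = random(), 0.0
    for outcome, p in theta.items():
        acc += p
        if u < acc:
            return outcome
    return outcome  # guard against floating-point rounding

seed(0)
print(bernoulli_trial(0.7))
print(multinomial_trial({"authentic": 0.5, "forged": 0.5}))
```

A two-outcome multinomial trial on {s: θ, f: 1 − θ} behaves exactly like the Bernoulli trial on θ, which is why the λ_β machinery lifts so directly.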

A bit of magic: the Dirichlet probability distribution

The Dirichlet family:
D(Θ | α) ∝ Θ_1^(α_1 − 1) ⋯ Θ_K^(α_K − 1)

Theorem
The Dirichlet family is a conjugate prior for multinomial trials. That is, if Prob[Θ | λ] is D(Θ | α_1, ..., α_K) and Prob[X | Θ, λ] follows the law of multinomial trials, Θ_1^(n_1) ⋯ Θ_K^(n_K), then, by Bayes, Prob[Θ | X, λ] is D(Θ | α_1 + n_1, ..., α_K + n_K).

So, we start with a family D(Θ_{c_i} | α_{c_i}), and then use multinomial trials X : E → ω to keep updating the valuation as D(Θ_{c_i} | α_{c_i} + X↾c_i), where X↾c_i records the counts of the events of cell c_i.
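A minimal sketch (my own, assuming a uniform prior and illustrative event names) of the conjugate update: thanks to the theorem, the posterior is obtained simply by adding the observed counts to the prior pseudo-counts.

```python
def dirichlet_update(alpha, counts):
    """Posterior Dirichlet parameters: alpha_k + n_k for each outcome k."""
    return {e: alpha[e] + counts.get(e, 0) for e in alpha}

def expected_valuation(alpha):
    """Posterior mean of a Dirichlet: E[Theta_k] = alpha_k / sum_j alpha_j."""
    total = sum(alpha.values())
    return {e: a / total for e, a in alpha.items()}

# Uniform prior D(Theta | 1, 1) over the hypothetical cell {reject, grant}.
prior = {"reject": 1, "grant": 1}
posterior = dirichlet_update(prior, {"reject": 2, "grant": 8})
print(expected_valuation(posterior))  # {'reject': 0.25, 'grant': 0.75}
```

No numerical integration is ever needed: the whole Bayesian update is bookkeeping on the pseudo-counts, which is what makes the approach practical for running trust systems.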

The Bayesian process

Start with a uniform distribution for each cell:
reject 1/2, grant 1/2; authentic 1/2, forged 1/2; correct 1/2, incorrect 1/2.

Theorem
E[Θ_{e^i_j} | X, λ] = (α_{e^i_j} + X(e^i_j)) / Σ_{k=1..K_i} (α_{e^i_k} + X(e^i_k))

Corollary
E[next outcome is x | X, λ] = Π_{e ∈ x} E[Θ_e | X, λ]

The Bayesian process

Suppose that X = {r ↦ 2, g ↦ 8, a ↦ 7, f ↦ 1, c ↦ 3, i ↦ 5}. Then the valuation is updated, cell by cell, to:
reject 1/4, grant 3/4; authentic 4/5, forged 1/5; correct 2/5, incorrect 3/5.

For instance, for reject: (1 + 2) / ((1 + 2) + (1 + 8)) = 3/12 = 1/4, by the theorem above with all α = 1.
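The worked example can be reproduced in a few lines. This sketch (mine) assumes a uniform prior α = 1 on each event and, following the corollary, takes the expected probability of an outcome to be the product of the posterior means of the events it contains.

```python
from functools import reduce

# Observed counts X for the example cells.
counts = {"reject": 2, "grant": 8, "authentic": 7, "forged": 1,
          "correct": 3, "incorrect": 5}
cells = [("reject", "grant"), ("authentic", "forged"), ("correct", "incorrect")]

def posterior_mean(event):
    """E[Theta_e | X] with uniform prior alpha = 1 on every event."""
    cell = next(c for c in cells if event in c)
    total = sum(1 + counts[e] for e in cell)
    return (1 + counts[event]) / total

def expect_next(outcome):
    """Expected probability of an outcome: product over its events."""
    return reduce(lambda p, e: p * posterior_mean(e), outcome, 1.0)

print(posterior_mean("grant"))                         # 0.75
print(expect_next(["grant", "authentic", "correct"]))  # 0.75 * 0.8 * 0.4, i.e. about 0.24
```

The per-event means reproduce the updated valuation above (grant 3/4, authentic 4/5, correct 2/5); multiplying them gives a prediction for a whole interaction outcome.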

Interpretation of results

We lifted the trust computation algorithms based on λ_β to our event-based models by replacing:
- binomial (Bernoulli) trials with multinomial trials;
- the β-distribution with the Dirichlet distribution.

Future directions (1/2): Hidden Markov Models

Probability parameters can change as the internal state changes, probabilistically. An HMM is λ = (A, B, π), where
- A is a Markov chain, describing state transitions;
- B is a family of distributions B_s : O → [0, 1];
- π is the initial state distribution.

Example, with O = {a, b} and two states:
state 1: π_1 = 1, B_1(a) = .95, B_1(b) = .05;
state 2: π_2 = 0, B_2(a) = .05, B_2(b) = .95;
with transition probabilities .01 and .25 between the two states.

Future directions (2/2): Hidden Markov Models

(The same two-state HMM as before: O = {a, b}, B_1(a) = .95, B_2(b) = .95, transition probabilities .01 and .25.)

Bayesian analysis:
- Which models best explain (and thus predict) the observations?
- How do we approximate an HMM from a sequence of observations?

Consider the history h = a^10 b^2. A counting algorithm would assign high probability to a occurring next. But the last two b's suggest a state change might have occurred, which would in reality make that probability very low.
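A sketch (my own) of this point via forward filtering. The emission probabilities and π are taken from the slide; since the state diagram does not survive extraction, I read the transitions as 1 → 2 with .01 and 2 → 1 with .25, with self-loops as the complements.

```python
# Two-state HMM from the slide (transition directions are my assumption).
A = {1: {1: 0.99, 2: 0.01}, 2: {1: 0.25, 2: 0.75}}   # state transitions
B = {1: {"a": 0.95, "b": 0.05}, 2: {"a": 0.05, "b": 0.95}}  # emissions
pi = {1: 1.0, 2: 0.0}                                 # initial distribution

def filter_states(history):
    """Unnormalised forward probabilities Prob[state, history]."""
    alpha = {s: pi[s] * B[s][history[0]] for s in pi}
    for obs in history[1:]:
        alpha = {s: sum(alpha[r] * A[r][s] for r in alpha) * B[s][obs]
                 for s in alpha}
    return alpha

def prob_next(history, symbol):
    """Prob[next observation = symbol | history], by forward filtering."""
    alpha = filter_states(history)
    z = sum(alpha.values())
    return sum(alpha[r] / z * A[r][s] * B[s][symbol]
               for r in alpha for s in alpha)

h = "a" * 10 + "b" * 2
print(prob_next(h, "a"))  # well below the 10/12 a naive counting model suggests
```

After a^10 the filter is confident in state 1, but two b's shift most of the posterior mass to state 2, so the predicted probability of a drops sharply, exactly the effect counting algorithms miss.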

Summary

- A framework for trust and reputation systems; applications to security and history-based access control.
- A Bayesian approach to observations and approximations; formal results based on probability theory.
- Towards model comparison and a complex-outcomes Bayesian model.

Future work
- Dynamic models with variable structure.
- Better integration of reputation into the model.
- Relationships with game-theoretic models.
