Review: Markov Logic Networks, by Matthew Richardson and Pedro Domingos. Presented by Xinran (Sean) Luo, u0866707.


Review

Markov Logic Networks
Matthew Richardson and Pedro Domingos

Xinran (Sean) Luo, u0866707

Overview

Markov Networks
First-order Logic
Markov Logic Networks
Inference
Learning
Experiments

Markov Networks

Also known as Markov random fields. Composed of:

◦ An undirected graph G
◦ A set of potential functions φk, one per clique of G

Joint distribution:

P(X = x) = \frac{1}{Z} \prod_k \phi_k(x_{\{k\}})

where x_{\{k\}} is the state of the kth clique, and Z is the partition function:

Z = \sum_{x \in \mathcal{X}} \prod_k \phi_k(x_{\{k\}})

Markov Networks

Log-linear models: each clique potential function is replaced by an exponentiated weighted sum of features of the state:

P(X = x) = \frac{1}{Z} \exp\left( \sum_j w_j f_j(x) \right)

Overview

Markov Networks
First-order Logic
Markov Logic Networks
Inference
Learning
Experiments

First-order Logic

A first-order knowledge base is a set of sentences or formulas in first-order logic.

Formulas are constructed from symbols: connectives, quantifiers, constants, variables, functions, predicates, etc.

Syntax for First-Order Logic

Connective → ∨ | ∧ | ⇒ | ⇔

Quantifier → ∃ | ∀

Constant → A | John | Car1

Variable → x | y | z |...

Predicate → Brother | Owns | ...

Function → father-of | plus | ...
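For instance, in the friends-and-smokers domain used in the example slides below, two formulas built with this syntax are:

∀x Smokes(x) ⇒ Cancer(x)
∀x ∀y Friends(x, y) ⇒ (Smokes(x) ⇔ Smokes(y))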

Overview

Markov Networks
First-order Logic
Markov Logic Networks
Inference
Learning
Experiments

Markov Logic Networks

A Markov Logic Network (MLN) L is a set of pairs (Fi, wi), where:

◦ Fi is a formula in first-order logic
◦ wi is a real number

Features of a Markov Logic Network

Together with a finite set of constants C, an MLN L defines a Markov network ML,C:

◦ For each possible grounding of each predicate in L, there is a binary node in ML,C. The node is 1 if the ground atom is true, and 0 otherwise.
◦ For each possible grounding of each formula in L, there is a feature in ML,C. The feature is 1 if the ground formula is true, and 0 otherwise.

Ground term: a term containing no variables.

Ground Markov networks produced by the same MLN have certain regularities in structure and parameters.

An MLN is a template for ground Markov networks.
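The ground network ML,C defines a joint distribution over possible worlds (the paper's central equation, where ni(x) is the number of true groundings of Fi in world x):

P(X = x) = \frac{1}{Z} \exp\left( \sum_i w_i n_i(x) \right)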

Example of an MLN

Suppose we have two constants: Anna (A) and Bob (B).

[Diagram: ground atoms Smokes(A), Smokes(B), Cancer(A), Cancer(B) as binary nodes]

Example of an MLN

Suppose we have two constants: Anna (A) and Bob (B).

[Diagram: adds ground atoms Friends(A,A), Friends(A,B), Friends(B,A), Friends(B,B)]

Example of an MLN

Suppose we have two constants: Anna (A) and Bob (B).

[Diagram: the full ground network over all eight atoms, with edges between atoms that appear together in some ground formula]
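As a sanity check on the example above, here is a minimal Python sketch that enumerates the nodes and features of the ground network ML,C for the two formulas and two constants (the names `constants`, `atoms`, and `features` are illustrative, not from the paper):

```python
from itertools import product

# Two constants (Anna, Bob) and the two example formulas:
#   F1: Smokes(x) => Cancer(x)
#   F2: Friends(x, y) => (Smokes(x) <=> Smokes(y))
constants = ["A", "B"]

# One binary node per grounding of each predicate.
atoms = (
    [f"Smokes({c})" for c in constants]
    + [f"Cancer({c})" for c in constants]
    + [f"Friends({x},{y})" for x, y in product(constants, repeat=2)]
)

# One binary feature per grounding of each formula.
features = (
    [f"Smokes({x}) => Cancer({x})" for x in constants]
    + [f"Friends({x},{y}) => (Smokes({x}) <=> Smokes({y}))"
       for x, y in product(constants, repeat=2)]
)

print(len(atoms))     # 8 ground atoms, matching the 8 nodes drawn above
print(len(features))  # 6 ground formulas: 2 for F1, 4 for F2
```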

MLNs and First-Order Logic

A first-order KB becomes an MLN by assigning a weight to each formula.

A satisfiable KB with positive weights on every formula gives an MLN that, in the limit of infinite weights, represents a uniform distribution over the worlds satisfying the KB.

An MLN can produce useful results even when the KB contains contradictions.

Overview

Markov Networks
First-order Logic
Markov Logic Networks
Inference
Learning
Experiments

Inference

Given that formula F1 holds, what is the probability that formula F2 holds?
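In the paper's notation, this is the conditional probability

P(F_2 \mid F_1, L, C) = \frac{P(F_1 \wedge F_2 \mid M_{L,C})}{P(F_1 \mid M_{L,C})}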

Two steps (approximate):

◦ Find the minimal subset of the ground network required to answer the query.
◦ Run MCMC (the Gibbs algorithm) over that subnetwork, sampling one ground atom at a time given its Markov blanket (the set of ground atoms that appear in some grounding of a formula with it).

Inference

The probability of a ground atom Xl, when its Markov blanket Bl is in state bl, is:

P(X_l = x_l \mid B_l = b_l) = \frac{\exp\left( \sum_{f_i \in F_l} w_i f_i(X_l = x_l, B_l = b_l) \right)}{\exp\left( \sum_{f_i \in F_l} w_i f_i(X_l = 0, B_l = b_l) \right) + \exp\left( \sum_{f_i \in F_l} w_i f_i(X_l = 1, B_l = b_l) \right)}

where Fl is the set of ground formulas in which Xl appears, and fi(Xl = xl, Bl = bl) is the value (0 or 1) of the feature for the ith ground formula when Xl = xl and Bl = bl.
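A minimal sketch of this Gibbs update in Python, assuming ground formulas are given as (weight, truth-function) pairs; the representation is illustrative, not the paper's implementation:

```python
import math
import random

def gibbs_step(state, atom, ground_formulas):
    """Resample one ground atom given its Markov blanket.

    state           -- dict mapping ground atoms to 0/1
    atom            -- the ground atom X_l to resample
    ground_formulas -- list of (w_i, f_i) pairs for the ground formulas
                       X_l appears in; f_i(state) returns 0 or 1
    """
    # Sum of weighted features with X_l forced to 0, then to 1.
    scores = []
    for value in (0, 1):
        state[atom] = value
        scores.append(sum(w * f(state) for w, f in ground_formulas))
    # P(X_l = 1 | B_l = b_l), per the equation above.
    p1 = math.exp(scores[1]) / (math.exp(scores[0]) + math.exp(scores[1]))
    state[atom] = 1 if random.random() < p1 else 0
    return state
```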

Overview

Markov Networks
First-order Logic
Markov Logic Networks
Inference
Learning
Experiments

Learning

Data comes from a relational database.

Strategy:

◦ Count the number of true groundings of each formula in the DB.
◦ Use the pseudo-likelihood to get the gradient:

\frac{\partial}{\partial w_i} \log P^{*}_{w}(X = x) = \sum_{l=1}^{n} \left[ n_i(x) - P_w(X_l = 0 \mid MB_x(X_l)) \, n_i(x_{[X_l = 0]}) - P_w(X_l = 1 \mid MB_x(X_l)) \, n_i(x_{[X_l = 1]}) \right]

where n_i(x_{[X_l = 0]}) is the number of true groundings of the ith formula when we force Xl = 0 and leave the remaining data unchanged, and similarly for n_i(x_{[X_l = 1]}).
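A rough Python sketch of this gradient for a single weight wi; the helper callables count_true and p_given_blanket are hypothetical placeholders for the grounding counter and the Gibbs-style conditional above:

```python
def pl_gradient_wi(db_state, atoms, count_true, p_given_blanket):
    """Gradient of the log pseudo-likelihood w.r.t. one weight w_i.

    db_state        -- dict mapping ground atoms (strings) to 0/1
    count_true(s)   -- number of true groundings of F_i in state s
    p_given_blanket(s, atom, v) -- P(atom = v | its Markov blanket in s)
    """
    grad = 0.0
    n_actual = count_true(db_state)      # n_i(x): counted on the DB itself
    for atom in atoms:
        grad += n_actual
        for v in (0, 1):
            forced = dict(db_state, **{atom: v})  # force X_l = v, rest unchanged
            grad -= p_given_blanket(db_state, atom, v) * count_true(forced)
    return grad
```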

Overview

Markov Networks
First-order Logic
Markov Logic Networks
Inference
Learning
Experiments

Experiments

Hand-built knowledge base (KB)
ILP: CLAUDIEN
Markov logic networks (MLNs)
◦ Using KB
◦ Using CLAUDIEN
◦ Using KB + CLAUDIEN
Bayesian network learner
Naïve Bayes

Results

Summary

Markov logic networks combine first-order logic and Markov networks:

◦ Syntax: first-order logic + positive weights
◦ Semantics: templates for Markov networks

Inference: minimal subset + Gibbs

Learning: pseudo-likelihood