February 26, 2020

AIMA Chapter 14

In [2]:
from probability import *
from utils import print_table
from notebook import psource, pseudocode, heatmap

BAYESIAN NETWORKS

A Bayesian network is a representation of the joint probability distribution encoding a collection of conditional independence statements.

A Bayes Network is implemented as the class BayesNet. It consists of a collection of nodes, each implemented by the class BayesNode. The implementation in these classes handles only boolean variables. Each node is associated with a variable and contains a conditional probability table (cpt). The cpt represents the probability distribution of the variable conditioned on its parents, P(X | parents).

Let us dive into the BayesNode implementation.

In [3]:
psource(BayesNode)

class BayesNode:
    """A conditional probability distribution for a boolean variable,
    P(X | parents). Part of a BayesNet."""

    def __init__(self, X, parents, cpt):
        """X is a variable name, and parents a sequence of variable
        names or a space-separated string.  cpt, the conditional
        probability table, takes one of these forms:

        * A number, the unconditional probability P(X=true). You can
          use this form when there are no parents.

        * A dict {v: p, ...}, the conditional probability distribution
          P(X=true | parent=v) = p. When there's just one parent.

        * A dict {(v1, v2, ...): p, ...}, the distribution P(X=true |
          parent1=v1, parent2=v2, ...) = p. Each key must have as many
          values as there are parents. You can use this form always;
          the first two are just conveniences.

        In all cases the probability of X being false is left implicit,
        since it follows from P(X=true).

        >>> X = BayesNode('X', '', 0.2)
        >>> Y = BayesNode('Y', 'P', {T: 0.2, F: 0.7})
        >>> Z = BayesNode('Z', 'P Q',
        ...    {(T, T): 0.2, (T, F): 0.3, (F, T): 0.5, (F, F): 0.7})
        """
        if isinstance(parents, str):
            parents = parents.split()

        # We store the table always in the third form above.
        if isinstance(cpt, (float, int)):  # no parents, 0-tuple
            cpt = {(): cpt}
        elif isinstance(cpt, dict):
            # one parent, 1-tuple
            if cpt and isinstance(list(cpt.keys())[0], bool):
                cpt = {(v,): p for v, p in cpt.items()}

        assert isinstance(cpt, dict)
        for vs, p in cpt.items():
            assert isinstance(vs, tuple) and len(vs) == len(parents)
            assert all(isinstance(v, bool) for v in vs)
            assert 0 <= p <= 1

        self.variable = X
        self.parents = parents
        self.cpt = cpt
        self.children = []

    def p(self, value, event):
        """Return the conditional probability
        P(X=value | parents=parent_values), where parent_values
        are the values of parents in event. (event must assign each
        parent a value.)
        >>> bn = BayesNode('X', 'Burglary', {T: 0.2, F: 0.625})
        >>> bn.p(False, {'Burglary': False, 'Earthquake': True})
        0.375"""
        assert isinstance(value, bool)
        ptrue = self.cpt[event_values(event, self.parents)]
        return ptrue if value else 1 - ptrue

    def sample(self, event):
        """Sample from the distribution for this variable conditioned
        on event's values for parent_variables. That is, return True/False
        at random according with the conditional probability given the
        parents."""
        return probability(self.p(True, event))

    def __repr__(self):
        return repr((self.variable, ' '.join(self.parents)))

The constructor takes in the name of the variable, its parents, and the cpt. Here variable is the name of the variable, like 'Earthquake'. parents should be a list or a space-separated string containing the variable names of the parents. The conditional probability table is a dict {(v1, v2, ...): p, ...}, the distribution P(X=true | parent1=v1, parent2=v2, ...) = p. Here the keys are combinations of boolean values that the parents take. The length and order of the values in each key must match the supplied parent list/string. In all cases the probability of X being false is left implicit, since it follows from P(X=true).

The example below, where we implement the network shown in Figure 14.3 of the book, will make this clearer.

The alarm node can be made as follows:

In [4]:
alarm_node = BayesNode('Alarm', ['Burglary', 'Earthquake'], 
                       {(True, True): 0.95,(True, False): 0.94, (False, True): 0.29, (False, False): 0.001})
In [5]:
john_node = BayesNode('JohnCalls', ['Alarm'], {True: 0.90, False: 0.05})
mary_node = BayesNode('MaryCalls', 'Alarm', {(True, ): 0.70, (False, ): 0.01}) # Using a string for parents.
# Equivalent to the john_node definition.

The general format used for the alarm node always works. For nodes with no parents we can also simply pass a number:

In [6]:
burglary_node = BayesNode('Burglary', '', 0.001)
earthquake_node = BayesNode('Earthquake', '', 0.002)

We can use a node to look up conditional probabilities with the p method. The method takes two arguments, value and event. event must be a dict of the form {variable: value, ...}, and value is the value of the variable we are interested in (True or False). The method returns the conditional probability P(X=value | parents=parent_values), where parent_values are the values of the parents in event (event must assign each parent a value).

In [7]:
john_node.p(False, {'Alarm': True, 'Burglary': True}) # P(JohnCalls=False | Alarm=True)
Out[7]:
0.09999999999999998

With all the information about nodes present it is possible to construct a Bayes Network using BayesNet. The BayesNet class does not take in nodes as input but instead takes a list of node_specs. An entry in node_specs is a tuple of the parameters we use to construct a BayesNode namely (X, parents, cpt). node_specs must be ordered with parents before children.

In [8]:
psource(BayesNet)

class BayesNet:
    """Bayesian network containing only boolean-variable nodes."""

    def __init__(self, node_specs=None):
        """Nodes must be ordered with parents before children."""
        self.nodes = []
        self.variables = []
        node_specs = node_specs or []
        for node_spec in node_specs:
            self.add(node_spec)

    def add(self, node_spec):
        """Add a node to the net. Its parents must already be in the
        net, and its variable must not."""
        node = BayesNode(*node_spec)
        assert node.variable not in self.variables
        assert all((parent in self.variables) for parent in node.parents)
        self.nodes.append(node)
        self.variables.append(node.variable)
        for parent in node.parents:
            self.variable_node(parent).children.append(node)

    def variable_node(self, var):
        """Return the node for the variable named var.
        >>> burglary.variable_node('Burglary').variable
        'Burglary'"""
        for n in self.nodes:
            if n.variable == var:
                return n
        raise Exception("No such variable: {}".format(var))

    def variable_values(self, var):
        """Return the domain of var."""
        return [True, False]

    def __repr__(self):
        return 'BayesNet({0!r})'.format(self.nodes)

The constructor of BayesNet takes each item in node_specs and adds a BayesNode to its nodes attribute by calling the add method. add in turn adds the node to the net; its parents must already be in the net, and its variable must not. Thus add lets us grow a BayesNet incrementally, provided each node's parents have already been added.
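
For instance (a minimal sketch using just the add method shown above), we can grow a small net one node at a time, as long as parents come before children:

net = BayesNet()
net.add(('Burglary', '', 0.001))
net.add(('Earthquake', '', 0.002))
net.add(('Alarm', 'Burglary Earthquake',
         {(True, True): 0.95, (True, False): 0.94,
          (False, True): 0.29, (False, False): 0.001}))
net.variables  # ['Burglary', 'Earthquake', 'Alarm']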

burglary global is an instance of BayesNet corresponding to the above example.

T, F = True, False

burglary = BayesNet([
    ('Burglary', '', 0.001),
    ('Earthquake', '', 0.002),
    ('Alarm', 'Burglary Earthquake',
     {(T, T): 0.95, (T, F): 0.94, (F, T): 0.29, (F, F): 0.001}),
    ('JohnCalls', 'Alarm', {T: 0.90, F: 0.05}),
    ('MaryCalls', 'Alarm', {T: 0.70, F: 0.01})
])
In [9]:
burglary
Out[9]:
BayesNet([('Burglary', ''), ('Earthquake', ''), ('Alarm', 'Burglary Earthquake'), ('JohnCalls', 'Alarm'), ('MaryCalls', 'Alarm')])

The BayesNet method variable_node gives access to the BayesNode instances inside a BayesNet. It is also possible to modify the cpt of a node directly through this method.

In [10]:
type(burglary.variable_node('Alarm'))
Out[10]:
probability.BayesNode
In [11]:
burglary.variable_node('Alarm').cpt
Out[11]:
{(True, True): 0.95,
 (True, False): 0.94,
 (False, True): 0.29,
 (False, False): 0.001}

Exact Inference in Bayesian Networks

A Bayes network is a more compact representation of the full joint distribution and, like the full joint distribution, allows us to do inference, i.e. answer questions about the probability distributions of random variables given some evidence.

Exact algorithms don't scale well for larger networks. Approximate algorithms are explained in the next section.

Inference by Enumeration

We apply techniques similar to those used for enumerate_joint_ask and enumerate_joint to draw inference from Bayesian Networks. enumeration_ask and enumerate_all implement the algorithm described in Figure 14.9 of the book.

In [12]:
psource(enumerate_all)

def enumerate_all(variables, e, bn):
    """Return the sum of those entries in P(variables | e{others})
    consistent with e, where P is the joint distribution represented
    by bn, and e{others} means e restricted to bn's other variables
    (the ones other than variables). Parents must precede children in variables."""
    if not variables:
        return 1.0
    Y, rest = variables[0], variables[1:]
    Ynode = bn.variable_node(Y)
    if Y in e:
        return Ynode.p(e[Y], e) * enumerate_all(rest, e, bn)
    else:
        return sum(Ynode.p(y, e) * enumerate_all(rest, extend(e, Y, y), bn)
                   for y in bn.variable_values(Y))

enumerate_all recursively evaluates a general form of Equation 14.4 in the book:

$$\textbf{P}(X | \textbf{e}) = \alpha \textbf{P}(X, \textbf{e}) = \alpha \sum_{y} \textbf{P}(X, \textbf{e}, \textbf{y})$$

such that P(X, e, y) is written as a product of conditional probabilities P(variable | parents(variable)) from the Bayesian network.
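
As a quick sanity check (a sketch; the value can be compared against Section 14.4 of the book), we can call enumerate_all directly with the query variable fixed in the evidence to get the unnormalized term $\textbf{P}(X, \textbf{e})$:

# Unnormalized P(Burglary=True, JohnCalls=True, MaryCalls=True).
# Earthquake and Alarm are not in e, so enumerate_all sums them out.
e = {'Burglary': True, 'JohnCalls': True, 'MaryCalls': True}
enumerate_all(burglary.variables, e, burglary)  # approximately 0.00059224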

enumeration_ask calls enumerate_all on each value of query variable X and finally normalizes them.

In [13]:
psource(enumeration_ask)

def enumeration_ask(X, e, bn):
    """Return the conditional probability distribution of variable X
    given evidence e, from BayesNet bn. [Figure 14.9]
    >>> enumeration_ask('Burglary', dict(JohnCalls=T, MaryCalls=T), burglary
    ...  ).show_approx()
    'False: 0.716, True: 0.284'"""
    assert X not in e, "Query variable must be distinct from evidence"
    Q = ProbDist(X)
    for xi in bn.variable_values(X):
        Q[xi] = enumerate_all(bn.variables, extend(e, X, xi), bn)
    return Q.normalize()

Let us solve the problem of finding P(Burglary=True | JohnCalls=True, MaryCalls=True) using the burglary network. enumeration_ask takes three arguments: X, the variable name; e, the evidence (a dict, as explained previously); and bn, the Bayes net to do inference on.

In [14]:
ans_dist = enumeration_ask('Burglary', {'JohnCalls': True, 'MaryCalls': True}, burglary)
ans_dist[True]
Out[14]:
0.2841718353643929

Variable Elimination

The enumeration algorithm can be improved substantially by eliminating repeated calculations. Enumeration effectively joins the full joint distribution over all hidden variables, which is exponential in the number of hidden variables. Variable elimination instead interleaves joining and marginalization.

Before we look into the implementation of Variable Elimination we must first familiarize ourselves with Factors.

In general we call a multidimensional array of the type P(Y1 ... Yn | X1 ... Xm) a factor, where some of the Xs and Ys may be assigned values. Factors are implemented in the probability module as the class Factor. They take variables and a cpt as input.

Helper Functions

There are certain helper functions that help create the cpt for a Factor given the evidence. Let us explore them one by one.

In [15]:
psource(make_factor)

def make_factor(var, e, bn):
    """Return the factor for var in bn's joint distribution given e.
    That is, bn's full joint distribution, projected to accord with e,
    is the pointwise product of these factors for bn's variables."""
    node = bn.variable_node(var)
    variables = [X for X in [var] + node.parents if X not in e]
    cpt = {event_values(e1, variables): node.p(e1[var], e1)
           for e1 in all_events(variables, bn, e)}
    return Factor(variables, cpt)

make_factor is used to create the cpt and variables that will be passed to the constructor of Factor. We use make_factor for each variable. It takes three arguments: var, the particular variable; e, the evidence we want to do inference on; and bn, the Bayes network.

Here variables for each node is a list consisting of the variable itself and its parents, minus any variables that are part of the evidence. It is created by taking [var] + node.parents and filtering out the variables that appear in the evidence.

The cpt created is similar to the original cpt of the node, but with only the rows that agree with the evidence.

In [16]:
psource(all_events)

def all_events(variables, bn, e):
    """Yield every way of extending e with values for all variables."""
    if not variables:
        yield e
    else:
        X, rest = variables[0], variables[1:]
        for e1 in all_events(rest, bn, e):
            for x in bn.variable_values(X):
                yield extend(e1, X, x)

The all_events function is a recursive generator that yields every extension of the evidence over the given variables; event_values then turns each yielded event into a key for the factor's cpt. Because it only extends the evidence, every yielded event is consistent with it, and since all_events is a generator, one event is produced per iteration.
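
A small illustration of the generator on the burglary network:

# Every way of extending the evidence {'Alarm': True} over ['Earthquake']:
list(all_events(['Earthquake'], burglary, {'Alarm': True}))
# [{'Alarm': True, 'Earthquake': True}, {'Alarm': True, 'Earthquake': False}]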

We can try this out using the example on Page 524 of the book. We will make f5(A) = P(m | A):

In [17]:
f5 = make_factor('MaryCalls', {'JohnCalls': True, 'MaryCalls': True}, burglary)
In [18]:
f5
Out[18]:
<probability.Factor at 0x7f6884e4e690>
In [19]:
f5.cpt
Out[19]:
{(True,): 0.7, (False,): 0.01}
In [20]:
f5.variables
Out[20]:
['Alarm']

Here the False key of f5.cpt gives the probability P(MaryCalls=True | Alarm=False). Because our representation stores probabilities only for the case where the node variable is True, this is the same as the cpt of the BayesNode. Let us try a slightly different example from the book, where the evidence is Alarm = True.

In [21]:
new_factor = make_factor('MaryCalls', {'Alarm': True}, burglary)
In [22]:
new_factor.cpt
Out[22]:
{(True,): 0.7, (False,): 0.30000000000000004}

Here the cpt is for P(MaryCalls | Alarm=True), so the probabilities for True and False sum to one. Note the difference between the two cases: again, the only rows included are those consistent with the evidence.

Operations on Factors

We are interested in two kinds of operations on factors: Pointwise Product, which is used to create joint distributions, and Summing Out, which is used for marginalization.

In [23]:
psource(Factor.pointwise_product)

    def pointwise_product(self, other, bn):
        """Multiply two factors, combining their variables."""
        variables = list(set(self.variables) | set(other.variables))
        cpt = {event_values(e, variables): self.p(e) * other.p(e)
               for e in all_events(variables, bn, {})}
        return Factor(variables, cpt)

Factor.pointwise_product implements the creation of a joint distribution by combining two factors. We take the union of the variables of both factors and then generate the cpt for the new factor using the all_events function. Note that the rows inconsistent with the evidence have already been eliminated. The pointwise product assigns new probabilities by multiplying rows, similar to a database join.
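
For instance, following the book's example on Page 524, we can build f4(A) = P(j | A) and multiply it with the factor f5(A) = P(m | A) created earlier (a quick sketch):

f4 = make_factor('JohnCalls', {'JohnCalls': True, 'MaryCalls': True}, burglary)
f4.cpt  # {(True,): 0.9, (False,): 0.05}
f4_f5 = f4.pointwise_product(f5, burglary)
f4_f5.cpt  # {(True,): 0.63, (False,): 0.0005}, i.e. 0.9*0.7 and 0.05*0.01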

In [24]:
psource(pointwise_product)

def pointwise_product(factors, bn):
    return reduce(lambda f, g: f.pointwise_product(g, bn), factors)

pointwise_product extends this operation to more than two factors by applying it sequentially, two at a time.

In [25]:
psource(Factor.sum_out)

    def sum_out(self, var, bn):
        """Make a factor eliminating var by summing over its values."""
        variables = [X for X in self.variables if X != var]
        cpt = {event_values(e, variables): sum(self.p(extend(e, var, val))
                                               for val in bn.variable_values(var))
               for e in all_events(variables, bn, {})}
        return Factor(variables, cpt)

Factor.sum_out makes a factor that eliminates a variable by summing over its values. Again, all_events is used to generate the combinations for the rest of the variables.
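
Summing Alarm out of the f4_f5 product built above collapses it into a factor with no variables (a sketch):

f4_f5.sum_out('Alarm', burglary).cpt  # {(): 0.6305}, i.e. 0.63 + 0.0005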

In [26]:
psource(sum_out)

def sum_out(var, factors, bn):
    """Eliminate var from all factors by summing over its values."""
    result, var_factors = [], []
    for f in factors:
        (var_factors if var in f.variables else result).append(f)
    result.append(pointwise_product(var_factors, bn).sum_out(var, bn))
    return result

sum_out uses both Factor.sum_out and pointwise_product to eliminate a particular variable from all factors by summing over its values.
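
The same elimination through the module-level function; it splits the factor list into those mentioning the variable and those that don't, and only the former are joined and summed (a sketch reusing f4 and f5 from above):

[f.cpt for f in sum_out('Alarm', [f4, f5], burglary)]  # [{(): 0.6305}]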

Elimination Ask

The algorithm described in Figure 14.11 of the book is implemented by the function elimination_ask. We use this for inference. The key idea is that we eliminate the hidden variables by interleaving joining and marginalization. It takes three arguments: X, the query variable; e, the evidence; and bn, the Bayes network.

The algorithm creates factors out of Bayes nodes in reverse order and eliminates hidden variables using sum_out. Finally it takes the pointwise product of all the factors and normalizes. Let us finally solve the problem of inferring

P(Burglary=True | JohnCalls=True, MaryCalls=True) using variable elimination.

In [27]:
psource(elimination_ask)

def elimination_ask(X, e, bn):
    """Compute bn's P(X|e) by variable elimination. [Figure 14.11]
    >>> elimination_ask('Burglary', dict(JohnCalls=T, MaryCalls=T), burglary
    ...  ).show_approx()
    'False: 0.716, True: 0.284'"""
    assert X not in e, "Query variable must be distinct from evidence"
    factors = []
    for var in reversed(bn.variables):
        factors.append(make_factor(var, e, bn))
        if is_hidden(var, X, e):
            factors = sum_out(var, factors, bn)
    return pointwise_product(factors, bn).normalize()
In [28]:
elimination_ask('Burglary', dict(JohnCalls=True, MaryCalls=True), burglary).show_approx()
Out[28]:
'False: 0.716, True: 0.284'

Runtime comparison

Let's see how the runtimes of these two algorithms compare. We might expect variable elimination to outperform enumeration, as it significantly reduces the number of repeated calculations.

In [29]:
%%timeit
enumeration_ask('Burglary', dict(JohnCalls=True, MaryCalls=True), burglary).show_approx()
73.7 µs ± 466 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)
In [30]:
%%timeit
elimination_ask('Burglary', dict(JohnCalls=True, MaryCalls=True), burglary).show_approx()
175 µs ± 223 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)

We observe that variable elimination was actually slower than enumeration here: on a network this small, the overhead of building and multiplying factors outweighs the savings from avoiding repeated calculations.
This happened because the Bayesian network in question is pretty small, with just 5 nodes, some of which aren't even required in the inference process. For larger, more complicated networks, variable elimination is significantly faster than enumeration, since the savings from avoiding repeated calculations grow with the number of nodes.

Approximate Inference in Bayesian Networks

Exact inference fails to scale for very large and complex Bayesian Networks. This section covers implementation of randomized sampling algorithms, also called Monte Carlo algorithms.

In [31]:
psource(BayesNode.sample)

    def sample(self, event):
        """Sample from the distribution for this variable conditioned
        on event's values for parent_variables. That is, return True/False
        at random according with the conditional probability given the
        parents."""
        return probability(self.p(True, event))

Before we consider the different algorithms in this section, let us look at the BayesNode.sample method. It samples from the distribution of this variable conditioned on the parents' values given in event, returning True or False at random according to that conditional probability. The probability function is a simple helper from the utils module which returns True with the probability passed to it.
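
probability itself is a one-liner; roughly, it behaves like this sketch:

import random

def probability(p):
    """A sketch mirroring the behavior of the utils helper:
    return True with probability p."""
    return p > random.uniform(0.0, 1.0)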

Prior Sampling

The idea of Prior Sampling is to sample from the Bayesian network in topological order. We start at the top of the network and sample according to P(Xi | parents(Xi)), i.e. the probability distribution from which a value is sampled is conditioned on the values already assigned to the variable's parents. This can be thought of as a simulation.

In [32]:
psource(prior_sample)

def prior_sample(bn):
    """Randomly sample from bn's full joint distribution. The result
    is a {variable: value} dict. [Figure 14.13]"""
    event = {}
    for node in bn.nodes:
        event[node.variable] = node.sample(event)
    return event

The function prior_sample implements the algorithm described in Figure 14.13 of the book. Nodes are sampled in topological order, and the partially built event is passed along to supply values for each node's parents. We will use the Bayesian network in Figure 14.12 (available as the sprinkler global in the probability module) to try out prior_sample.

Traversing the graph in topological order is important. There are two possible topological orderings for this particular directed acyclic graph.

  1. Cloudy -> Sprinkler -> Rain -> Wet Grass
  2. Cloudy -> Rain -> Sprinkler -> Wet Grass

We can follow either of these orderings to sample from the network; any other ordering cannot be used. One way to think about this is that Cloudy can be seen as a precondition of both Rain and Sprinkler, and just as in planning, preconditions need to be satisfied before an action can be executed.

We store the samples in a list of observations. Let us find P(Rain=True) by taking 1000 random samples from the network.
In [33]:
N = 1000
all_observations = [prior_sample(sprinkler) for x in range(N)]

Now we filter to get the observations where Rain = True

In [34]:
rain_true = [observation for observation in all_observations if observation['Rain'] == True]

Finally, we can find P(Rain=True)

In [35]:
answer = len(rain_true) / N
print(answer)
0.497

Sampling again might give different results, as we have no control over the distribution of the random samples.

In [36]:
N = 1000
all_observations = [prior_sample(sprinkler) for x in range(N)]
rain_true = [observation for observation in all_observations if observation['Rain'] == True]
answer = len(rain_true) / N
print(answer)
0.502

To evaluate a conditional distribution, we can use a two-step filtering process: first keep only the observations consistent with the evidence, then count how often each value of the query variable occurs among them. For example, to find P(Cloudy=True | Rain=True): we have already filtered the observations consistent with our evidence into rain_true. Now we apply a second filtering step on rain_true, counting the samples where Cloudy is also True; the ratio estimates P(Cloudy=True | Rain=True).

In [37]:
rain_and_cloudy = [observation for observation in rain_true if observation['Cloudy'] == True]
answer = len(rain_and_cloudy) / len(rain_true)
print(answer)
0.7808764940239044

Rejection Sampling

Rejection Sampling is based on an idea similar to what we did just now. First, it generates samples from the prior distribution specified by the network. Then, it rejects all those that do not match the evidence.
Rejection sampling is advantageous only when we know the query beforehand. While prior sampling generally works for any query, it might fail in some scenarios.
Let's say we have a generic Bayesian network, we have evidence e, and we want to know how often a state A is true given that e is true. Normally, prior sampling can answer this question, but let's assume that the probability of evidence e being true in our actual probability distribution is very small. In this situation, it is possible that sampling never encounters a data point where e is true. If our sampled data has no instance of e being true, P(e) = 0, and therefore P(A | e) = P(A, e) / P(e) = 0/0, which is undefined. We cannot find the required value using this sample.
We can certainly increase the number of sample points, but we can never guarantee that we will encounter a case where e is true (assuming our actual probability distribution has at least one case where e is true). To guarantee this, we would have to consider every single data point, which means we lose the speed advantage that approximation provides and we essentially end up computing exact inference on the Bayesian network.

Rejection sampling is useful in this situation, as we already know the query.
While sampling from the network, we reject any sample that is inconsistent with the evidence variables of the given query (in this example, the only evidence variable is e) and keep only the rest. This way, we will have enough data with the required evidence to infer queries involving a subset of that evidence.

The function rejection_sampling implements the algorithm described in Figure 14.14.

In [38]:
psource(rejection_sampling)

def rejection_sampling(X, e, bn, N=10000):
    """Estimate the probability distribution of variable X given
    evidence e in BayesNet bn, using N samples.  [Figure 14.14]
    Raises a ZeroDivisionError if all the N samples are rejected,
    i.e., inconsistent with e.
    >>> random.seed(47)
    >>> rejection_sampling('Burglary', dict(JohnCalls=T, MaryCalls=T),
    ...   burglary, 10000).show_approx()
    'False: 0.7, True: 0.3'
    """
    counts = {x: 0 for x in bn.variable_values(X)}  # bold N in [Figure 14.14]
    for j in range(N):
        sample = prior_sample(bn)  # boldface x in [Figure 14.14]
        if consistent_with(sample, e):
            counts[sample[X]] += 1
    return ProbDist(X, counts)

The function keeps a count for each possible value of the query variable and increments the count whenever an observation is consistent with the evidence. It takes as input X, the query variable; e, the evidence; bn, the Bayes net; and N, the number of prior samples to generate.

consistent_with is used to check consistency.

In [39]:
psource(consistent_with)

def consistent_with(event, evidence):
    """Is event consistent with the given evidence?"""
    return all(evidence.get(k, v) == v
               for k, v in event.items())
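
A quick check of its behavior on sprinkler-style events:

consistent_with({'Rain': True, 'Cloudy': False}, {'Rain': True})   # True
consistent_with({'Rain': False, 'Cloudy': True}, {'Rain': True})   # False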

To answer P(Cloudy=True | Rain=True)

In [40]:
p = rejection_sampling('Cloudy', dict(Rain=True), sprinkler, 1000)
p[True]
Out[40]:
0.80859375

Likelihood Weighting

Rejection sampling takes a long time to run when the probability of generating a consistent sample is low, and it is also slow for larger networks and for more evidence variables, since it then rejects a large fraction of the samples. Likelihood Weighting solves this by fixing the evidence (i.e. not sampling it) and then using weights to make sure that our overall sampling is still consistent.

The pseudocode in Figure 14.15 is implemented as likelihood_weighting and weighted_sample.

In [41]:
psource(weighted_sample)

def weighted_sample(bn, e):
    """Sample an event from bn that's consistent with the evidence e;
    return the event and its weight, the likelihood that the event
    accords to the evidence."""
    w = 1
    event = dict(e)  # boldface x in [Figure 14.15]
    for node in bn.nodes:
        Xi = node.variable
        if Xi in e:
            w *= node.p(e[Xi], event)
        else:
            event[Xi] = node.sample(event)
    return event, w

weighted_sample samples an event from the Bayesian network that is consistent with the evidence e and returns the event together with its weight, the likelihood that the event accords with the evidence. It takes two parameters: bn, the Bayesian network, and e, the evidence.

The weight is obtained by multiplying P(xi | parents(xi)) for each node in the evidence, with event initialized to the evidence at the start of the function. In the sample below, for instance, Cloudy happens to be sampled False, so the weight is P(Rain=True | Cloudy=False) = 0.2.

In [42]:
weighted_sample(sprinkler, dict(Rain=True))
Out[42]:
({'Rain': True, 'Cloudy': False, 'Sprinkler': False, 'WetGrass': True}, 0.2)
In [43]:
psource(likelihood_weighting)

def likelihood_weighting(X, e, bn, N=10000):
    """Estimate the probability distribution of variable X given
    evidence e in BayesNet bn.  [Figure 14.15]
    >>> random.seed(1017)
    >>> likelihood_weighting('Burglary', dict(JohnCalls=T, MaryCalls=T),
    ...   burglary, 10000).show_approx()
    'False: 0.702, True: 0.298'
    """
    W = {x: 0 for x in bn.variable_values(X)}
    for j in range(N):
        sample, weight = weighted_sample(bn, e)  # boldface x, w in [Figure 14.15]
        W[sample[X]] += weight
    return ProbDist(X, W)

likelihood_weighting implements the algorithm that solves our inference problem. The code is similar to rejection_sampling, but instead of adding one for each consistent sample we add the weight obtained from weighted_sample.

In [44]:
likelihood_weighting('Cloudy', dict(Rain=True), sprinkler, 200).show_approx()
Out[44]:
'False: 0.184, True: 0.816'

Gibbs Sampling

In likelihood weighting, it is possible to obtain low weights when the evidence variables sit at the bottom of the Bayesian network, because influence only propagates downwards during sampling.

Gibbs Sampling solves this. The implementation of Figure 14.16 is provided in the function gibbs_ask.

In [45]:
psource(gibbs_ask)

def gibbs_ask(X, e, bn, N=1000):
    """[Figure 14.16]"""
    assert X not in e, "Query variable must be distinct from evidence"
    counts = {x: 0 for x in bn.variable_values(X)}  # bold N in [Figure 14.16]
    Z = [var for var in bn.variables if var not in e]
    state = dict(e)  # boldface x in [Figure 14.16]
    for Zi in Z:
        state[Zi] = random.choice(bn.variable_values(Zi))
    for j in range(N):
        for Zi in Z:
            state[Zi] = markov_blanket_sample(Zi, state, bn)
            counts[state[X]] += 1
    return ProbDist(X, counts)

In gibbs_ask we initialize the non-evidence variables to random values. Then we repeatedly pick a non-evidence variable and resample it from P(variable | current values of all remaining variables). In practice we sample directly from the variable's Markov blanket using markov_blanket_sample, which gives the same distribution because the terms not involving the variable cancel out. The arguments of gibbs_ask are similar to those of likelihood_weighting.
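
Concretely, markov_blanket_sample draws the new value from the distribution given by Equation 14.12 in the book, which involves only the variable's own CPT row and those of its children:

$$P(x_{i}' \mid mb(X_{i})) = \alpha \, P(x_{i}' \mid parents(X_{i})) \prod_{Y_{j} \in Children(X_{i})} P(y_{j} \mid parents(Y_{j}))$$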

In [93]:
gibbs_ask('Cloudy', dict(Rain=True), sprinkler, 200).show_approx()
Out[93]:
'False: 0.19, True: 0.81'

Runtime analysis

Let's take a look at how much time each algorithm takes.

In [46]:
%%timeit
all_observations = [prior_sample(sprinkler) for x in range(1000)]
rain_true = [observation for observation in all_observations if observation['Rain'] == True]
len([observation for observation in rain_true if observation['Cloudy'] == True]) / len(rain_true)
7 ms ± 331 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
In [47]:
%%timeit
rejection_sampling('Cloudy', dict(Rain=True), sprinkler, 1000)
8.42 ms ± 336 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
In [48]:
%%timeit
likelihood_weighting('Cloudy', dict(Rain=True), sprinkler, 200)
1.38 ms ± 50.2 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
In [49]:
%%timeit
gibbs_ask('Cloudy', dict(Rain=True), sprinkler, 200)
6.51 ms ± 252 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)

As expected, all the algorithms have fairly similar runtimes on this small network. However, rejection sampling becomes much slower and less accurate when the probability of finding data points consistent with the required evidence is small.
Likelihood weighting is the fastest of all as it doesn't involve rejecting samples, but it can also have quite high variance.

HIDDEN MARKOV MODELS

Often, we need to carry out probabilistic inference on temporal data or a sequence of observations where the order of observations matters. We require a model similar to a Bayesian network, but one that grows over time to keep up with the latest evidence. If you are familiar with the mdp module or Markov models in general, you can probably guess that a Markov model might come close to representing our problem accurately.
A Markov model is basically a chain-structured Bayesian network in which there is one state for each time step and each node has an identical probability distribution. The first node, however, has a different distribution, called the prior distribution, which models the initial state of the process. A state in a Markov model depends only on the previous state and the latest evidence, not on the states before that.
A Hidden Markov Model, or HMM, is a special case of a Markov model in which the state of the process is described by a single discrete random variable. The possible values of the variable are the possible states of the world.
But what if we want to model a process with two or more state variables? In that case, we can still fit the process into the HMM framework by redefining our state variables as a single "megavariable". We do this because carrying out inference on HMMs has standard optimized algorithms. An HMM is very similar to an MDP, but we don't have the option of taking actions as in MDPs; instead, the process carries on as new evidence appears.
If an HMM is truncated at a fixed length, it becomes a Bayesian network, and general BN inference can be used on it to answer queries.

Before we start, it will be helpful to understand the structure of a temporal model. We will use the example of the book with the guard and the umbrella. In this example, the state $\textbf{X}$ is whether it is a rainy day (X = True) or not (X = False) at Day $\textbf{t}$. In the sensor or observation model, the observation or evidence $\textbf{U}$ is whether the professor holds an umbrella (U = True) or not (U = False) on Day $\textbf{t}$. Based on that, the transition model is

$X_{t-1}$   $X_{t}$   $P(X_{t} \mid X_{t-1})$
False       False     0.7
False       True      0.3
True        False     0.3
True        True      0.7

And the sensor model will be:

$X_{t}$   $U_{t}$   $P(U_{t} \mid X_{t})$
False     True      0.2
False     False     0.8
True      True      0.9
True      False     0.1

HMMs are implemented in the HiddenMarkovModel class. Let's have a look.

In [50]:
psource(HiddenMarkovModel)

class HiddenMarkovModel:
    """A Hidden markov model which takes Transition model and Sensor model as inputs"""

    def __init__(self, transition_model, sensor_model, prior=None):
        self.transition_model = transition_model
        self.sensor_model = sensor_model
        self.prior = prior or [0.5, 0.5]

    def sensor_dist(self, ev):
        if ev is True:
            return self.sensor_model[0]
        else:
            return self.sensor_model[1]

We instantiate the object hmm of the class using a list of lists for both the transition and the sensor model. Index 0 corresponds to the state being True (rain) and index 1 to False: each row of the transition model is conditioned on the previous state, while each row of the sensor model corresponds to an evidence value (row 0 for U=True, row 1 for U=False), with the columns giving the probability of that evidence under each state.

In [51]:
umbrella_transition_model = [[0.7, 0.3], [0.3, 0.7]]
umbrella_sensor_model = [[0.9, 0.2], [0.1, 0.8]]
hmm = HiddenMarkovModel(umbrella_transition_model, umbrella_sensor_model)

The sensor_dist() method returns a list with the conditional probabilities of the sensor model.

In [52]:
hmm.sensor_dist(ev=True)
Out[52]:
[0.9, 0.2]

Now that we have defined an HMM object, our task here is to compute the belief $B_{t}(x)= P(X_{t}|U_{1:t})$ given evidence U at each time step t.
The basic inference tasks that must be solved are:

  1. Filtering: Computing the posterior probability distribution over the most recent state, given all the evidence up to the current time step.
  2. Prediction: Computing the posterior probability distribution over the future state.
  3. Smoothing: Computing the posterior probability distribution over a past state. Smoothing provides a better estimation as it incorporates more evidence.
  4. Most likely explanation: Finding the most likely sequence of states for a given observation.
  5. Learning: The transition and sensor models can be learnt, if not yet known, just like in an information gathering agent.

There are three primary methods to carry out inference in Hidden Markov Models:

  1. The Forward-Backward algorithm
  2. Fixed lag smoothing
  3. Particle filtering

Let's have a look at how we can carry out inference and answer queries based on our umbrella HMM using these algorithms.

FORWARD-BACKWARD

This is a general algorithm that works for all Markov models, not just HMMs. In the filtering task (inference) we are given the evidence U at each time step t and we want to compute the belief $B_{t}(x) = P(X_{t}|U_{1:t})$. We can think of it as a three-step process:

  1. In every step we start with the current belief $P(X_{t}|U_{1:t})$
  2. We update it for time
  3. We update it for evidence

The forward algorithm performs steps 2 and 3 at once. It updates, or rather reweights, the initial belief using the transition and sensor models. Let's see the umbrella example. On Day 0 no observation is available, so we assume it is equally likely to rain or not. In the HiddenMarkovModel class, the prior probabilities for Day 0 default to [0.5, 0.5].

The update is calculated with the forward() function: we predict one step with the transition model and then update our belief using the observation model. The function returns a list with the probabilities of raining or not on Day 1.

In [53]:
psource(forward)

def forward(HMM, fv, ev):
    prediction = vector_add(scalar_vector_product(fv[0], HMM.transition_model[0]),
                            scalar_vector_product(fv[1], HMM.transition_model[1]))
    sensor_dist = HMM.sensor_dist(ev)

    return normalize(element_wise_product(sensor_dist, prediction))
In [54]:
umbrella_prior = [0.5, 0.5]
belief_day_1 = forward(hmm, umbrella_prior, ev=True)
print ('The probability of raining on day 1 is {:.2f}'.format(belief_day_1[0]))
The probability of raining on day 1 is 0.82

On Day 2 our initial belief is the updated belief of Day 1. Using the forward() function again, we can compute the probability of rain on Day 2:

In [55]:
belief_day_2 = forward(hmm, belief_day_1, ev=True)
print ('The probability of raining in day 2 is {:.2f}'.format(belief_day_2[0]))
The probability of raining in day 2 is 0.88

In the smoothing part we are interested in computing the distribution over past states given evidence up to the present. Assume that we want to compute the distribution for time k, with $0 \leq k < t$; the computation can be divided into two parts:

  1. The forward message, computed by filtering forward from 1 to k.
  2. The backward message, computed by a recursive process that runs backwards from t to k.

Rather than starting at time 1, the backward pass starts at time t. In the umbrella example, we can compute the backward message from Day 2 to Day 1 using the backward function. Its parameters are the object created by the HiddenMarkovModel class, the evidence for Day 2 (in our case True), and the initial backward message for time t+1. Since no observation is available beyond time t, that message is initialized to [1, 1]. The backward function returns a list with the conditional probabilities.

In [56]:
psource(backward)

def backward(HMM, b, ev):
    sensor_dist = HMM.sensor_dist(ev)
    prediction = element_wise_product(sensor_dist, b)

    return normalize(vector_add(scalar_vector_product(prediction[0], HMM.transition_model[0]),
                                scalar_vector_product(prediction[1], HMM.transition_model[1])))
In [57]:
b = [1, 1]
backward(hmm, b, ev=True)
Out[57]:
[0.6272727272727272, 0.37272727272727274]

Some may notice that the result is not the same as in the book. The main reason is that the book does not normalize at this step, whereas our backward implementation does, using the normalize() helper function.

In order to find the smoothed estimate for rain on Day k, we will use the forward_backward() function. As in the example in the book, the umbrella is observed on both days and the prior distribution is [0.5, 0.5].

In [58]:
pseudocode('Forward-Backward')
Out[58]:

AIMA3e

function FORWARD-BACKWARD(ev, prior) returns a vector of probability distributions
inputs: ev, a vector of evidence values for steps 1,…,t
     prior, the prior distribution on the initial state, P(X0)
local variables: fv, a vector of forward messages for steps 0,…,t
        b, a representation of the backward message, initially all 1s
        sv, a vector of smoothed estimates for steps 1,…,t

fv[0] ← prior
for i = 1 to t do
   fv[i] ← FORWARD(fv[i − 1], ev[i])
for i = t downto 1 do
   sv[i] ← NORMALIZE(fv[i] × b)
   b ← BACKWARD(b, ev[i])
return sv


Figure ?? The forward-backward algorithm for smoothing: computing posterior probabilities of a sequence of states given a sequence of observations. The FORWARD and BACKWARD operators are defined by Equations (??) and (??), respectively.

In [59]:
umbrella_prior = [0.5, 0.5]
prob = forward_backward(hmm, ev=[T, T], prior=umbrella_prior)
print ('The probability of raining in Day 0 is {:.2f} and in Day 1 is {:.2f}'.format(prob[0][0], prob[1][0]))
The probability of raining in Day 0 is 0.65 and in Day 1 is 0.88

Since HMMs are represented as single-variable systems, we can represent the transition model and sensor model as matrices. The forward_backward algorithm can easily be carried out on this representation (as we have done here) with a time complexity of $O(S^{2} t)$, where t is the length of the sequence: each step multiplies a vector of size $S$ with a matrix of dimensions $S \times S$.
Additionally, the forward pass stores $t$ vectors of size $S$, which makes the auxiliary space requirement $O(St)$.

Is there any way we can improve the time or space complexity?
Fortunately, the matrix representation of the HMM allows us to do so.
If $f$ and $b$ represent the forward and backward messages respectively, we can modify the smoothing algorithm by first running the standard forward pass to compute $f_{1:t}$ (forgetting all the intermediate results) and then running the backward pass for both $b$ and $f$ together, using them to compute the smoothed estimate at each step. This optimization reduces the auxiliary space requirement to a constant (irrespective of the length of the sequence), provided the transition matrix is invertible and the sensor model has no zeros (which is sometimes hard to accomplish).
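
In the book's matrix notation (transition matrix $T$, diagonal observation matrix $O_{t}$), the forward update is $f_{1:t+1} = \alpha \, O_{t+1} T^{\top} f_{1:t}$, and inverting it lets us propagate $f$ backwards alongside $b$:

$$f_{1:t} = \alpha' \, (T^{\top})^{-1} O_{t+1}^{-1} \, f_{1:t+1}$$

which is exactly why the transition matrix must be invertible and the sensor model must contain no zeros.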

Let's look at another algorithm that carries out smoothing in a more optimized way.
