{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "## April 3 - More Learning - Statistics and Naive Bayes\n", "\n", "Mostly chapter 18 and 20 from AIMA" ] }, { "cell_type": "code", "execution_count": 235, "metadata": {}, "outputs": [], "source": [ "from learning import *\n", "from notebook import *" ] }, { "cell_type": "code", "execution_count": 236, "metadata": {}, "outputs": [], "source": [ "iris = DataSet(name=\"iris\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## NAIVE BAYES LEARNER\n", "\n", "### Overview\n", "\n", "#### Theory of Probabilities\n", "\n", "The Naive Bayes algorithm is a probabilistic classifier, making use of [Bayes' Theorem](https://en.wikipedia.org/wiki/Bayes%27_theorem). The theorem states that the conditional probability of A given B equals the conditional probability of B given A multiplied by the probability of A, divided by the probability of B.\n", "\n", "$$P(A|B) = \\dfrac{P(B|A)*P(A)}{P(B)}$$\n", "\n", "From the theory of Probabilities we have the Multiplication Rule, if the events *X* are independent the following is true:\n", "\n", "$$P(X_{1} \\cap X_{2} \\cap ... \\cap X_{n}) = P(X_{1})*P(X_{2})*...*P(X_{n})$$\n", "\n", "For conditional probabilities this becomes:\n", "\n", "$$P(X_{1}, X_{2}, ..., X_{n}|Y) = P(X_{1}|Y)*P(X_{2}|Y)*...*P(X_{n}|Y)$$" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Classifying an Item\n", "\n", "How can we use the above to classify an item though?\n", "\n", "We have a dataset with a set of classes (C) and we want to classify an item with a set of features (F). Essentially what we want to do is predict the class of an item given the features.\n", "\n", "For a specific class, Class, we will find the conditional probability given the item features:\n", "\n", "$$P(Class|F) = \\dfrac{P(F|Class)*P(Class)}{P(F)}$$\n", "\n", "We will do this for every class and we will pick the maximum. This will be the class the item is classified in.\n", "\n", "The features though are a vector with many elements. We need to break the probabilities up using the multiplication rule. Thus the above equation becomes:\n", "\n", "$$P(Class|F) = \\dfrac{P(Class)*P(F_{1}|Class)*P(F_{2}|Class)*...*P(F_{n}|Class)}{P(F_{1})*P(F_{2})*...*P(F_{n})}$$\n", "\n", "The calculation of the conditional probability then depends on the calculation of the following:\n", "\n", "*a)* The probability of Class in the dataset.\n", "\n", "*b)* The conditional probability of each feature occurring in an item classified in Class.\n", "\n", "*c)* The probabilities of each individual feature.\n", "\n", "For *a)*, we will count how many times Class occurs in the dataset (aka how many items are classified in a particular class).\n", "\n", "For *b)*, if the feature values are discrete ('Blue', '3', 'Tall', etc.), we will count how many times a feature value occurs in items of each class. If the feature values are not discrete, we will go a different route. We will use a distribution function to calculate the probability of values for a given class and feature. If we know the distribution function of the dataset, then great, we will use it to compute the probabilities. If we don't know the function, we can assume the dataset follows the normal (Gaussian) distribution without much loss of accuracy. 
In fact, the [Central Limit Theorem](https://en.wikipedia.org/wiki/Central_limit_theorem) tells us that the sum (or mean) of many independent contributions tends toward a Gaussian, which is why the normal distribution is a reasonable default assumption.\n", "\n", "*NOTE:* If the values are continuous but we use the discrete approach, problems can arise. For one, if we have two values, '5.0' and '5.1', the discrete approach treats them as two completely different values, despite them being so close. Second, if we try to classify an item with a feature value of '5.15' that never appears for that feature, its probability will be 0. This might lead to misclassification. Generally, the continuous approach is more accurate and more useful, despite the overhead of calculating the distribution function.\n", "\n", "The last one, *c)*, is tricky. If feature values are discrete, we can count how many times they occur in the dataset. But what if the feature values are continuous? Imagine a dataset with a height feature. Is it worth it to count how many times each value occurs? Most of the time it is not, since there can be minuscule differences in the values (for example, 1.7 meters and 1.700001 meters are practically equal, but they count as different values).\n", "\n", "So, since we cannot calculate the feature value probabilities, what are we going to do?\n", "\n", "Let's take a step back and rethink exactly what we are doing. We are essentially comparing conditional probabilities of all the classes. For two classes, A and B, we want to know which one is greater:\n", "\n", "$$\\dfrac{P(F|A)*P(A)}{P(F)} \\quad vs. \\quad \\dfrac{P(F|B)*P(B)}{P(F)}$$\n", "\n", "Wait, $P(F)$ is the same for both classes! In fact, it is the same for every class, because it does not depend on the class at all.\n", "\n", "So, for *c)*, we actually don't need to calculate it at all." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Wrapping It Up\n", "\n", "Classifying an item then becomes a matter of calculating the conditional probabilities of feature values and the probabilities of classes. This is something very desirable and computationally cheap.\n", "\n", "Remember, though, that all the above hold because we assumed that the features are independent. In most real-world cases that is not true. Is that an issue here? Fret not, for the algorithm performs surprisingly well even under that assumption. That is why the algorithm is called the **Naive** Bayes Classifier: we (naively) assume that the features are independent to make computations easier." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Implementation\n", "\n", "The implementation of the Naive Bayes Classifier is split in two: *Learning* and *Simple*. The *learning* classifier takes as input a dataset and learns the needed distributions from it. It is itself split in two, for discrete and continuous features. The *simple* classifier takes as input not a dataset, but already-calculated distributions (a dictionary of `CountingProbDist` objects)." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Discrete\n", "\n", "The implementation for discrete values counts how many times each feature value occurs for each class, and how many times each class occurs. The results are stored in a `CountingProbDist` object." 
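\n", "\n", "As a quick illustration with made-up numbers (not drawn from the iris data): if 50 of 150 examples are labeled 'setosa', then $P(setosa) = 50/150$. If feature 0 takes the value 5 in 10 of those 50 examples, then $P(F_{0}=5|setosa) = 10/50$. The (unnormalized) score for 'setosa' is then $P(setosa)$ multiplied by one such conditional per feature, exactly the product from the *Classifying an Item* section above.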
] }, { "cell_type": "markdown", "metadata": {}, "source": [ "With the below code you can see the probabilities of the class \"Setosa\" appearing in the dataset and the probability of the first feature (at index 0) of the same class having a value of 5. Notice that the second probability is relatively small, even though if we observe the dataset we will find that a lot of values are around 5. The issue arises because the features in the Iris dataset are continuous, and we are assuming they are discrete. If the features were discrete (for example, \"Tall\", \"3\", etc.) this probably wouldn't have been the case and we would see a much nicer probability distribution." ] }, { "cell_type": "code", "execution_count": 237, "metadata": {}, "outputs": [ { "data": { "text/html": [ "\n", "\n", "\n", "
\n", "class CountingProbDist:\n",
" """A probability distribution formed by observing and counting examples.\n",
" If p is an instance of this class and o is an observed value, then\n",
" there are 3 main operations:\n",
" p.add(o) increments the count for observation o by 1.\n",
" p.sample() returns a random element from the distribution.\n",
" p[o] returns the probability for o (as in a regular ProbDist)."""\n",
"\n",
" def __init__(self, observations=None, default=0):\n",
" """Create a distribution, and optionally add in some observations.\n",
" By default this is an unsmoothed distribution, but saying default=1,\n",
" for example, gives you add-one smoothing."""\n",
" if observations is None:\n",
" observations = []\n",
" self.dictionary = {}\n",
" self.n_obs = 0\n",
" self.default = default\n",
" self.sampler = None\n",
"\n",
" for o in observations:\n",
" self.add(o)\n",
"\n",
" def add(self, o):\n",
" """Add an observation o to the distribution."""\n",
" self.smooth_for(o)\n",
" self.dictionary[o] += 1\n",
" self.n_obs += 1\n",
" self.sampler = None\n",
"\n",
" def smooth_for(self, o):\n",
" """Include o among the possible observations, whether or not\n",
" it's been observed yet."""\n",
" if o not in self.dictionary:\n",
" self.dictionary[o] = self.default\n",
" self.n_obs += self.default\n",
" self.sampler = None\n",
"\n",
" def __getitem__(self, item):\n",
" """Return an estimate of the probability of item."""\n",
" self.smooth_for(item)\n",
" return self.dictionary[item] / self.n_obs\n",
"\n",
" # (top() and sample() are not used in this module, but elsewhere.)\n",
"\n",
" def top(self, n):\n",
" """Return (count, obs) tuples for the n most frequent observations."""\n",
" return heapq.nlargest(n, [(v, k) for (k, v) in self.dictionary.items()])\n",
"\n",
" def sample(self):\n",
" """Return a random sample from the distribution."""\n",
" if self.sampler is None:\n",
" self.sampler = weighted_sampler(list(self.dictionary.keys()),\n",
" list(self.dictionary.values()))\n",
" return self.sampler()\n",
"
def NaiveBayesLearner(dataset, continuous=True, simple=False):\n",
" if simple:\n",
" return NaiveBayesSimple(dataset)\n",
" if continuous:\n",
" return NaiveBayesContinuous(dataset)\n",
" else:\n",
" return NaiveBayesDiscrete(dataset)\n",
"
def NaiveBayesSimple(distribution):\n",
" """A simple naive bayes classifier that takes as input a dictionary of\n",
" CountingProbDist objects and classifies items according to these distributions.\n",
" The input dictionary is in the following form:\n",
" (ClassName, ClassProb): CountingProbDist"""\n",
" target_dist = {c_name: prob for c_name, prob in distribution.keys()}\n",
" attr_dists = {c_name: count_prob for (c_name, _), count_prob in distribution.items()}\n",
"\n",
" def predict(example):\n",
" """Predict the target value for example. Calculate probabilities for each\n",
" class and pick the max."""\n",
" def class_probability(targetval):\n",
" attr_dist = attr_dists[targetval]\n",
" return target_dist[targetval] * product(attr_dist[a] for a in example)\n",
"\n",
" return argmax(target_dist.keys(), key=class_probability)\n",
"\n",
" return predict\n",
"
def NaiveBayesDiscrete(dataset):\n",
" """Just count how many times each value of each input attribute\n",
" occurs, conditional on the target value. Count the different\n",
" target values too."""\n",
"\n",
" target_vals = dataset.values[dataset.target]\n",
" target_dist = CountingProbDist(target_vals)\n",
" attr_dists = {(gv, attr): CountingProbDist(dataset.values[attr])\n",
" for gv in target_vals\n",
" for attr in dataset.inputs}\n",
" for example in dataset.examples:\n",
" targetval = example[dataset.target]\n",
" target_dist.add(targetval)\n",
" for attr in dataset.inputs:\n",
" attr_dists[targetval, attr].add(example[attr])\n",
"\n",
" def predict(example):\n",
" """Predict the target value for example. Consider each possible value,\n",
" and pick the most likely by looking at each attribute independently."""\n",
" def class_probability(targetval):\n",
" return (target_dist[targetval] *\n",
" product(attr_dists[targetval, attr][example[attr]]\n",
" for attr in dataset.inputs))\n",
" return argmax(target_vals, key=class_probability)\n",
"\n",
" return predict\n",
"
class DataSet:\n",
" """A data set for a machine learning problem. It has the following fields:\n",
"\n",
" d.examples A list of examples. Each one is a list of attribute values.\n",
" d.attrs A list of integers to index into an example, so example[attr]\n",
" gives a value. Normally the same as range(len(d.examples[0])).\n",
" d.attrnames Optional list of mnemonic names for corresponding attrs.\n",
" d.target The attribute that a learning algorithm will try to predict.\n",
" By default the final attribute.\n",
" d.inputs The list of attrs without the target.\n",
" d.values A list of lists: each sublist is the set of possible\n",
" values for the corresponding attribute. If initially None,\n",
" it is computed from the known examples by self.setproblem.\n",
" If not None, an erroneous value raises ValueError.\n",
" d.distance A function from a pair of examples to a nonnegative number.\n",
" Should be symmetric, etc. Defaults to mean_boolean_error\n",
" since that can handle any field types.\n",
" d.name Name of the data set (for output display only).\n",
" d.source URL or other source where the data came from.\n",
" d.exclude A list of attribute indexes to exclude from d.inputs. Elements\n",
" of this list can either be integers (attrs) or attrnames.\n",
"\n",
" Normally, you call the constructor and you're done; then you just\n",
" access fields like d.examples and d.target and d.inputs."""\n",
"\n",
" def __init__(self, examples=None, attrs=None, attrnames=None, target=-1,\n",
" inputs=None, values=None, distance=mean_boolean_error,\n",
" name='', source='', exclude=()):\n",
" """Accepts any of DataSet's fields. Examples can also be a\n",
" string or file from which to parse examples using parse_csv.\n",
" Optional parameter: exclude, as documented in .setproblem().\n",
" >>> DataSet(examples='1, 2, 3')\n",
" <DataSet(): 1 examples, 3 attributes>\n",
" """\n",
" self.name = name\n",
" self.source = source\n",
" self.values = values\n",
" self.distance = distance\n",
" self.got_values_flag = bool(values)\n",
"\n",
" # Initialize .examples from string or list or data directory\n",
" if isinstance(examples, str):\n",
" self.examples = parse_csv(examples)\n",
" elif examples is None:\n",
" self.examples = parse_csv(open_data(name + '.csv').read())\n",
" else:\n",
" self.examples = examples\n",
"\n",
" # Attrs are the indices of examples, unless otherwise stated. \n",
" if self.examples is not None and attrs is None:\n",
" attrs = list(range(len(self.examples[0])))\n",
"\n",
" self.attrs = attrs\n",
"\n",
" # Initialize .attrnames from string, list, or by default\n",
" if isinstance(attrnames, str):\n",
" self.attrnames = attrnames.split()\n",
" else:\n",
" self.attrnames = attrnames or attrs\n",
" self.setproblem(target, inputs=inputs, exclude=exclude)\n",
"\n",
" def setproblem(self, target, inputs=None, exclude=()):\n",
" """Set (or change) the target and/or inputs.\n",
" This way, one DataSet can be used multiple ways. inputs, if specified,\n",
" is a list of attributes, or specify exclude as a list of attributes\n",
" to not use in inputs. Attributes can be -n .. n, or an attrname.\n",
" Also computes the list of possible values, if that wasn't done yet."""\n",
" self.target = self.attrnum(target)\n",
" exclude = list(map(self.attrnum, exclude))\n",
" if inputs:\n",
" self.inputs = removeall(self.target, inputs)\n",
" else:\n",
" self.inputs = [a for a in self.attrs\n",
" if a != self.target and a not in exclude]\n",
" if not self.values:\n",
" self.update_values()\n",
" self.check_me()\n",
"\n",
" def check_me(self):\n",
" """Check that my fields make sense."""\n",
" assert len(self.attrnames) == len(self.attrs)\n",
" assert self.target in self.attrs\n",
" assert self.target not in self.inputs\n",
" assert set(self.inputs).issubset(set(self.attrs))\n",
" if self.got_values_flag:\n",
" # only check if values are provided while initializing DataSet\n",
" list(map(self.check_example, self.examples))\n",
"\n",
" def add_example(self, example):\n",
" """Add an example to the list of examples, checking it first."""\n",
" self.check_example(example)\n",
" self.examples.append(example)\n",
"\n",
" def check_example(self, example):\n",
" """Raise ValueError if example has any invalid values."""\n",
" if self.values:\n",
" for a in self.attrs:\n",
" if example[a] not in self.values[a]:\n",
" raise ValueError('Bad value {} for attribute {} in {}'\n",
" .format(example[a], self.attrnames[a], example))\n",
"\n",
" def attrnum(self, attr):\n",
" """Returns the number used for attr, which can be a name, or -n .. n-1."""\n",
" if isinstance(attr, str):\n",
" return self.attrnames.index(attr)\n",
" elif attr < 0:\n",
" return len(self.attrs) + attr\n",
" else:\n",
" return attr\n",
"\n",
" def update_values(self):\n",
" self.values = list(map(unique, zip(*self.examples)))\n",
"\n",
" def sanitize(self, example):\n",
" """Return a copy of example, with non-input attributes replaced by None."""\n",
" return [attr_i if i in self.inputs else None\n",
" for i, attr_i in enumerate(example)]\n",
"\n",
" def classes_to_numbers(self, classes=None):\n",
" """Converts class names to numbers."""\n",
" if not classes:\n",
" # If classes were not given, extract them from values\n",
" classes = sorted(self.values[self.target])\n",
" for item in self.examples:\n",
" item[self.target] = classes.index(item[self.target])\n",
"\n",
" def remove_examples(self, value=''):\n",
" """Remove examples that contain given value."""\n",
" self.examples = [x for x in self.examples if value not in x]\n",
" self.update_values()\n",
"\n",
" def split_values_by_classes(self):\n",
" """Split values into buckets according to their class."""\n",
" buckets = defaultdict(lambda: [])\n",
" target_names = self.values[self.target]\n",
"\n",
" for v in self.examples:\n",
" item = [a for a in v if a not in target_names] # Remove target from item\n",
" buckets[v[self.target]].append(item) # Add item to bucket of its class\n",
"\n",
" return buckets\n",
"\n",
" def find_means_and_deviations(self):\n",
" """Finds the means and standard deviations of self.dataset.\n",
" means : A dictionary for each class/target. Holds a list of the means\n",
" of the features for the class.\n",
" deviations: A dictionary for each class/target. Holds a list of the sample\n",
" standard deviations of the features for the class."""\n",
" target_names = self.values[self.target]\n",
" feature_numbers = len(self.inputs)\n",
"\n",
" item_buckets = self.split_values_by_classes()\n",
"\n",
" means = defaultdict(lambda: [0] * feature_numbers)\n",
" deviations = defaultdict(lambda: [0] * feature_numbers)\n",
"\n",
" for t in target_names:\n",
" # Find all the item feature values for item in class t\n",
" features = [[] for i in range(feature_numbers)]\n",
" for item in item_buckets[t]:\n",
" for i in range(feature_numbers):\n",
" features[i].append(item[i])\n",
"\n",
" # Calculate means and deviations fo the class\n",
" for i in range(feature_numbers):\n",
" means[t][i] = mean(features[i])\n",
" deviations[t][i] = stdev(features[i])\n",
"\n",
" return means, deviations\n",
"\n",
" def __repr__(self):\n",
" return '<DataSet({}): {:d} examples, {:d} attributes>'.format(\n",
" self.name, len(self.examples), len(self.attrs))\n",
"
def NaiveBayesContinuous(dataset):\n",
" """Count how many times each target value occurs.\n",
" Also, find the means and deviations of input attribute values for each target value."""\n",
" means, deviations = dataset.find_means_and_deviations()\n",
"\n",
" target_vals = dataset.values[dataset.target]\n",
" target_dist = CountingProbDist(target_vals)\n",
"\n",
" def predict(example):\n",
" """Predict the target value for example. Consider each possible value,\n",
" and pick the most likely by looking at each attribute independently."""\n",
" def class_probability(targetval):\n",
" prob = target_dist[targetval]\n",
" for attr in dataset.inputs:\n",
" prob *= gaussian(means[targetval][attr], deviations[targetval][attr], example[attr])\n",
" return prob\n",
"\n",
" return argmax(target_vals, key=class_probability)\n",
"\n",
" return predict\n",
"
def network(input_units, hidden_layer_sizes, output_units, activation=sigmoid):\n",
" """Create Directed Acyclic Network of given number layers.\n",
" hidden_layers_sizes : List number of neuron units in each hidden layer\n",
" excluding input and output layers\n",
" """\n",
" layers_sizes = [input_units] + hidden_layer_sizes + [output_units]\n",
"\n",
" net = [[NNUnit(activation) for n in range(size)]\n",
" for size in layers_sizes]\n",
" n_layers = len(net)\n",
"\n",
" # Make Connection\n",
" for i in range(1, n_layers):\n",
" for n in net[i]:\n",
" for k in net[i-1]:\n",
" n.inputs.append(k)\n",
" n.weights.append(0)\n",
" return net\n",
"
class NNUnit:\n",
" """Single Unit of Multiple Layer Neural Network\n",
" inputs: Incoming connections\n",
" weights: Weights to incoming connections\n",
" """\n",
"\n",
" def __init__(self, activation=sigmoid, weights=None, inputs=None):\n",
" self.weights = weights or []\n",
" self.inputs = inputs or []\n",
" self.value = None\n",
" self.activation = activation\n",
"
def PerceptronLearner(dataset, learning_rate=0.01, epochs=100):\n",
" """Logistic Regression, NO hidden layer"""\n",
" i_units = len(dataset.inputs)\n",
" o_units = len(dataset.values[dataset.target])\n",
" hidden_layer_sizes = []\n",
" raw_net = network(i_units, hidden_layer_sizes, o_units)\n",
" learned_net = BackPropagationLearner(dataset, raw_net, learning_rate, epochs)\n",
"\n",
" def predict(example):\n",
" o_nodes = learned_net[1]\n",
"\n",
" # Forward pass\n",
" for node in o_nodes:\n",
" in_val = dotproduct(example, node.weights)\n",
" node.value = node.activation(in_val)\n",
"\n",
" # Hypothesis\n",
" return find_max_node(o_nodes)\n",
"\n",
" return predict\n",
"
def LinearLearner(dataset, learning_rate=0.01, epochs=100):\n",
" """Define with learner = LinearLearner(data); infer with learner(x)."""\n",
" idx_i = dataset.inputs\n",
" idx_t = dataset.target # As of now, dataset.target gives only one index.\n",
" examples = dataset.examples\n",
" num_examples = len(examples)\n",
"\n",
" # X transpose\n",
" X_col = [dataset.values[i] for i in idx_i] # vertical columns of X\n",
"\n",
" # Add dummy\n",
" ones = [1 for _ in range(len(examples))]\n",
" X_col = [ones] + X_col\n",
"\n",
" # Initialize random weigts\n",
" num_weights = len(idx_i) + 1\n",
" w = random_weights(min_value=-0.5, max_value=0.5, num_weights=num_weights)\n",
"\n",
" for epoch in range(epochs):\n",
" err = []\n",
" # Pass over all examples\n",
" for example in examples:\n",
" x = [1] + example\n",
" y = dotproduct(w, x)\n",
" t = example[idx_t]\n",
" err.append(t - y)\n",
"\n",
" # update weights\n",
" for i in range(len(w)):\n",
" w[i] = w[i] + learning_rate * (dotproduct(err, X_col[i]) / num_examples)\n",
"\n",
" def predict(example):\n",
" x = [1] + example\n",
" return dotproduct(w, x)\n",
" return predict\n",
"
def EnsembleLearner(learners):\n",
" """Given a list of learning algorithms, have them vote."""\n",
" def train(dataset):\n",
" predictors = [learner(dataset) for learner in learners]\n",
"\n",
" def predict(example):\n",
" return mode(predictor(example) for predictor in predictors)\n",
" return predict\n",
" return train\n",
"
\n", "One way to think about this is that you have more to learn from your mistakes than\n", "from your success. If you predict X and are correct, you have nothing to learn.\n", "\n", "\n", "\n", "All the examples start with equal weights and a hypothesis is generated using these examples. Examples which are incorrectly classified, their weights are increased so that they can be classified correctly by the next hypothesis. The examples that are correctly classified, their weights are reduced. This process is repeated K times (here K is an input to the algorithm) and hence, K hypotheses are generated.\n", "\n", "These K hypotheses are also assigned weights according to their performance on the weighted training set. The final ensemble hypothesis is the weighted-majority combination of these K hypotheses.\n", "\n", "The speciality of AdaBoost is that by using weak learners and a sufficiently large *K*, a highly accurate classifier can be learned irrespective of the complexity of the function being learned or the dullness of the hypothesis space.\n", "\n", "### Implementation\n", "\n", "As seen in the previous section, the `PerceptronLearner` does not perform that well on the iris dataset. We'll use perceptron as the learner for the AdaBoost algorithm and try to increase the accuracy. \n", "\n", "Let's first see what AdaBoost is exactly:" ] }, { "cell_type": "code", "execution_count": 230, "metadata": {}, "outputs": [ { "data": { "text/html": [ "\n", "\n", "\n", "\n", "\n", "If you predict X and are wrong, you made a mistake. You need to update your model. You have something to learn.\n", " \n", "In the cognitive science world, this is known as failure-driven learning.\n", "It is a powerful notion.\n", "\n", "Think about it. How do you know that you have something to learn? You make a \n", "mistake. You expected one outcome and something else happened.\n", "\n", "Actually, you do not need to make a mistake. Suppose you try something and expect\n", "to fail. If you actually succeed, you still have something to learn because \n", "you had an **expectation failure**.\n", " \n", "
def AdaBoost(L, K):\n",
" """[Figure 18.34]"""\n",
"\n",
" def train(dataset):\n",
" examples, target = dataset.examples, dataset.target\n",
" N = len(examples)\n",
" epsilon = 1/(2*N)\n",
" w = [1/N]*N\n",
" h, z = [], []\n",
" for k in range(K):\n",
" h_k = L(dataset, w)\n",
" h.append(h_k)\n",
" error = sum(weight for example, weight in zip(examples, w)\n",
" if example[target] != h_k(example))\n",
"\n",
" # Avoid divide-by-0 from either 0% or 100% error rates:\n",
" error = clip(error, epsilon, 1 - epsilon)\n",
" for j, example in enumerate(examples):\n",
" if example[target] == h_k(example):\n",
" w[j] *= error/(1 - error)\n",
" w = normalize(w)\n",
" z.append(math.log((1 - error)/error))\n",
" return WeightedMajority(h, z)\n",
" return train\n",
"
def WeightedLearner(unweighted_learner):\n",
" """Given a learner that takes just an unweighted dataset, return\n",
" one that takes also a weight for each example. [p. 749 footnote 14]"""\n",
" def train(dataset, weights):\n",
" return unweighted_learner(replicated_dataset(dataset, weights))\n",
" return train\n",
"
def err_ratio(predict, dataset, examples=None, verbose=0):\n",
" """Return the proportion of the examples that are NOT correctly predicted.\n",
" verbose - 0: No output; 1: Output wrong; 2 (or greater): Output correct"""\n",
" examples = examples or dataset.examples\n",
" if len(examples) == 0:\n",
" return 0.0\n",
" right = 0\n",
" for example in examples:\n",
" desired = example[dataset.target]\n",
" output = predict(dataset.sanitize(example))\n",
" if output == desired:\n",
" right += 1\n",
" if verbose >= 2:\n",
" print(' OK: got {} for {}'.format(desired, example))\n",
" elif verbose:\n",
" print('WRONG: got {}, expected {} for {}'.format(\n",
" output, desired, example))\n",
" return 1 - (right/len(examples))\n",
"