{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# KNOWLEDGE\n", "\n", "The [knowledge](https://github.com/aimacode/aima-python/blob/master/knowledge.py) module covers **Chapter 19: Knowledge in Learning** from Stuart Russel's and Peter Norvig's book *Artificial Intelligence: A Modern Approach*.\n", "\n", "Execute the cell below to get started." ] }, { "cell_type": "code", "execution_count": 1, "metadata": {}, "outputs": [], "source": [ "from knowledge import *\n", "\n", "from notebook import pseudocode, psource" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## CONTENTS\n", "\n", "* Overview\n", "* Current-Best Learning" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## OVERVIEW\n", "\n", "Like the [learning module](https://github.com/aimacode/aima-python/blob/master/learning.ipynb), this chapter focuses on methods for generating a model/hypothesis for a domain; however, unlike the learning chapter, here we use prior knowledge to help us learn from new experiences and find a proper hypothesis.\n", "\n", "### First-Order Logic\n", "\n", "Usually knowledge in this field is represented as **first-order logic**; a type of logic that uses variables and quantifiers in logical sentences. Hypotheses are represented by logical sentences with variables, while examples are logical sentences with set values instead of variables. The goal is to assign a value to a special first-order logic predicate, called **goal predicate**, for new examples given a hypothesis. We learn this hypothesis by infering knowledge from some given examples.\n", "\n", "### Representation\n", "\n", "In this module, we use dictionaries to represent examples, with keys being the attribute names and values being the corresponding example values. Examples also have an extra boolean field, 'GOAL', for the goal predicate. A hypothesis is represented as a list of dictionaries. Each dictionary in that list represents a disjunction. Inside these dictionaries/disjunctions we have conjunctions.\n", "\n", "For example, say we want to predict if an animal (cat or dog) will take an umbrella given whether or not it rains or the animal wears a coat. The goal value is 'take an umbrella' and is denoted by the key 'GOAL'. An example:\n", "\n", "`{'Species': 'Cat', 'Coat': 'Yes', 'Rain': 'Yes', 'GOAL': True}`\n", "\n", "A hypothesis can be the following:\n", "\n", "`[{'Species': 'Cat'}]`\n", "\n", "which means an animal will take an umbrella if and only if it is a cat.\n", "\n", "### Consistency\n", "\n", "We say that an example `e` is **consistent** with an hypothesis `h` if the assignment from the hypothesis for `e` is the same as `e['GOAL']`. If the above example and hypothesis are `e` and `h` respectively, then `e` is consistent with `h` since `e['Species'] == 'Cat'`. For `e = {'Species': 'Dog', 'Coat': 'Yes', 'Rain': 'Yes', 'GOAL': True}`, the example is no longer consistent with `h`, since the value assigned to `e` is *False* while `e['GOAL']` is *True*." ] }, { "cell_type": "markdown", "metadata": { "collapsed": true }, "source": [ "## CURRENT-BEST LEARNING\n", "\n", "### Overview\n", "\n", "In **Current-Best Learning**, we start with a hypothesis and we refine it as we iterate through the examples. For each example, there are three possible outcomes: the example is consistent with the hypothesis, the example is a **false positive** (real value is false but got predicted as true) and the example is a **false negative** (real value is true but got predicted as false). 
Depending on the outcome, we refine the hypothesis accordingly:\n", "\n", "* Consistent: We do not change the hypothesis and move on to the next example.\n", "\n", "* False Positive: We **specialize** the hypothesis, which means we add a conjunction.\n", "\n", "* False Negative: We **generalize** the hypothesis, either by removing a conjunction or a disjunction, or by adding a disjunction.\n", "\n", "When specializing or generalizing, we must make sure not to create inconsistencies with the examples seen previously; if a refinement turns out to be a dead end, the algorithm backtracks. Fortunately, there is usually more than one candidate specialization or generalization, so we have several options to choose from: we go through the candidates and adopt the first one that is consistent with all the examples seen up to that point." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Pseudocode" ] }, { "cell_type": "code", "execution_count": 2, "metadata": {}, "outputs": [ { "data": { "text/markdown": [ "### AIMA3e\n", "__function__ Current-Best-Learning(_examples_, _h_) __returns__ a hypothesis or fail \n", " __if__ _examples_ is empty __then__ \n", "   __return__ _h_ \n", " _e_ ← First(_examples_) \n", " __if__ _e_ is consistent with _h_ __then__ \n", "   __return__ Current-Best-Learning(Rest(_examples_), _h_) \n", " __else if__ _e_ is a false positive for _h_ __then__ \n", "   __for each__ _h'_ __in__ specializations of _h_ consistent with _examples_ seen so far __do__ \n", "     _h''_ ← Current-Best-Learning(Rest(_examples_), _h'_) \n", "     __if__ _h''_ ≠ _fail_ __then return__ _h''_ \n", " __else if__ _e_ is a false negative for _h_ __then__ \n", "   __for each__ _h'_ __in__ generalizations of _h_ consistent with _examples_ seen so far __do__ \n", "     _h''_ ← Current-Best-Learning(Rest(_examples_), _h'_) \n", "     __if__ _h''_ ≠ _fail_ __then return__ _h''_ \n", " __return__ _fail_ \n", "\n", "---\n", "__Figure ??__ The current-best-hypothesis learning algorithm. It searches for a consistent hypothesis that fits all the examples and backtracks when no consistent specialization/generalization can be found. To start the algorithm, any hypothesis can be passed in; it will be specialized or generalized as needed." ], "text/plain": [ "" ] }, "execution_count": 2, "metadata": {}, "output_type": "execute_result" } ], "source": [ "pseudocode('Current-Best-Learning')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Implementation\n", "\n", "As mentioned earlier, examples are dictionaries (with keys being the attribute names) and hypotheses are lists of dictionaries (each dictionary is a disjunction). Also, in the hypothesis, we denote the *NOT* operation with an exclamation mark (!).\n", "\n", "We have functions to compute the list of all specializations/generalizations of a hypothesis, and to check whether an example is consistent with, a false positive for, or a false negative for a hypothesis. We also have an auxiliary function to add a disjunction (an *OR* operation) to a hypothesis, and two more functions to check the consistency of all (or just the negative) examples.\n", "\n", "You can read the source by running the cell below:" ] }, { "cell_type": "code", "execution_count": 3, "metadata": { "collapsed": true }, "outputs": [ { "data": { "text/html": [ "\n", "\n", "\n", "\n", " \n", " \n", " \n", "\n", "\n", "

\n", "\n", "
def current_best_learning(examples, h, examples_so_far=None):\n",
       "    """ [Figure 19.2]\n",
       "    The hypothesis is a list of dictionaries, with each dictionary representing\n",
       "    a disjunction."""\n",
       "    if not examples:\n",
       "        return h\n",
       "\n",
       "    examples_so_far = examples_so_far or []\n",
       "    e = examples[0]\n",
       "    if is_consistent(e, h):\n",
       "        return current_best_learning(examples[1:], h, examples_so_far + [e])\n",
       "    elif false_positive(e, h):\n",
       "        for h2 in specializations(examples_so_far + [e], h):\n",
       "            h3 = current_best_learning(examples[1:], h2, examples_so_far + [e])\n",
       "            if h3 != 'FAIL':\n",
       "                return h3\n",
       "    elif false_negative(e, h):\n",
       "        for h2 in generalizations(examples_so_far + [e], h):\n",
       "            h3 = current_best_learning(examples[1:], h2, examples_so_far + [e])\n",
       "            if h3 != 'FAIL':\n",
       "                return h3\n",
       "\n",
       "    return 'FAIL'\n",
       "\n",
       "\n",
       "def specializations(examples_so_far, h):\n",
       "    """Specialize the hypothesis by adding AND operations to the disjunctions"""\n",
       "    hypotheses = []\n",
       "\n",
       "    for i, disj in enumerate(h):\n",
       "        for e in examples_so_far:\n",
       "            for k, v in e.items():\n",
       "                if k in disj or k == 'GOAL':\n",
       "                    continue\n",
       "\n",
       "                h2 = h[i].copy()\n",
       "                h2[k] = '!' + v\n",
       "                h3 = h.copy()\n",
       "                h3[i] = h2\n",
       "                if check_all_consistency(examples_so_far, h3):\n",
       "                    hypotheses.append(h3)\n",
       "\n",
       "    shuffle(hypotheses)\n",
       "    return hypotheses\n",
       "\n",
       "\n",
       "def generalizations(examples_so_far, h):\n",
       "    """Generalize the hypothesis. First delete operations\n",
       "    (including disjunctions) from the hypothesis. Then, add OR operations."""\n",
       "    hypotheses = []\n",
       "\n",
       "    # Delete disjunctions\n",
       "    disj_powerset = powerset(range(len(h)))\n",
       "    for disjs in disj_powerset:\n",
       "        h2 = h.copy()\n",
       "        for d in reversed(list(disjs)):\n",
       "            del h2[d]\n",
       "\n",
       "        if check_all_consistency(examples_so_far, h2):\n",
       "            hypotheses += h2\n",
       "\n",
       "    # Delete AND operations in disjunctions\n",
       "    for i, disj in enumerate(h):\n",
       "        a_powerset = powerset(disj.keys())\n",
       "        for attrs in a_powerset:\n",
       "            h2 = h[i].copy()\n",
       "            for a in attrs:\n",
       "                del h2[a]\n",
       "\n",
       "            if check_all_consistency(examples_so_far, [h2]):\n",
       "                h3 = h.copy()\n",
       "                h3[i] = h2.copy()\n",
       "                hypotheses += h3\n",
       "\n",
       "    # Add OR operations\n",
       "    if hypotheses == [] or hypotheses == [{}]:\n",
       "        hypotheses = add_or(examples_so_far, h)\n",
       "    else:\n",
       "        hypotheses.extend(add_or(examples_so_far, h))\n",
       "\n",
       "    shuffle(hypotheses)\n",
       "    return hypotheses\n",
       "
\n", "\n", "\n" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "psource(current_best_learning, specializations, generalizations)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "You can view the auxiliary functions in the [knowledge module](https://github.com/aimacode/aima-python/blob/master/knowledge.py). A few notes on the functionality of some of the important methods:" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "* `specializations`: For each disjunction in the hypothesis, it adds a conjunction for values in the examples encountered so far (if the conjunction is consistent with all the examples). It returns a list of hypotheses.\n", "\n", "* `generalizations`: It adds to the list of hypotheses in three phases. First it deletes disjunctions, then it deletes conjunctions and finally it adds a disjunction.\n", "\n", "* `add_or`: Used by `generalizations` to add an *or operation* (a disjunction) to the hypothesis. Since the last example is the problematic one which wasn't consistent with the hypothesis, it will model the new disjunction to that example. It creates a disjunction for each combination of attributes in the example and returns the new hypotheses consistent with the negative examples encountered so far. We do not need to check the consistency of positive examples, since they are already consistent with at least one other disjunction in the hypotheses' set, so this new disjunction doesn't affect them. In other words, if the value of a positive example is negative under the disjunction, it doesn't matter since we know there exists a disjunction consistent with the example." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Since the algorithm stops searching the specializations/generalizations after the first consistent hypothesis is found, usually you will get different results each time you run the code." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Examples\n", "\n", "We will take a look at two examples. The first is a trivial one, while the second is a bit more complicated (you can also find it in the book).\n", "\n", "Earlier, we had the \"animals taking umbrellas\" example. Now we want to find a hypothesis to predict whether or not an animal will take an umbrella. The attributes are `Species`, `Rain` and `Coat`. The possible values are `[Cat, Dog]`, `[Yes, No]` and `[Yes, No]` respectively. Below we give seven examples (with `GOAL` we denote whether an animal will take an umbrella or not):" ] }, { "cell_type": "code", "execution_count": 4, "metadata": {}, "outputs": [], "source": [ "animals_umbrellas = [\n", " {'Species': 'Cat', 'Rain': 'Yes', 'Coat': 'No', 'GOAL': True},\n", " {'Species': 'Cat', 'Rain': 'Yes', 'Coat': 'Yes', 'GOAL': True},\n", " {'Species': 'Dog', 'Rain': 'Yes', 'Coat': 'Yes', 'GOAL': True},\n", " {'Species': 'Dog', 'Rain': 'Yes', 'Coat': 'No', 'GOAL': False},\n", " {'Species': 'Dog', 'Rain': 'No', 'Coat': 'No', 'GOAL': False},\n", " {'Species': 'Cat', 'Rain': 'No', 'Coat': 'No', 'GOAL': False},\n", " {'Species': 'Cat', 'Rain': 'No', 'Coat': 'Yes', 'GOAL': True}\n", "]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let our initial hypothesis be `[{'Species': 'Cat'}]`. That means every cat will be taking an umbrella. We can see that this is not true, but it doesn't matter since we will refine the hypothesis using the Current-Best algorithm. First, let's see how that initial hypothesis fares to have a point of reference." 
] }, { "cell_type": "code", "execution_count": 5, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "True\n", "True\n", "False\n", "False\n", "False\n", "True\n", "True\n" ] } ], "source": [ "initial_h = [{'Species': 'Cat'}]\n", "\n", "for e in animals_umbrellas:\n", " print(guess_value(e, initial_h))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We got 5/7 correct. Not terribly bad, but we can do better. Lets now run the algorithm and see how that performs in comparison to our current result. " ] }, { "cell_type": "code", "execution_count": 6, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "True\n", "True\n", "True\n", "False\n", "False\n", "False\n", "True\n" ] } ], "source": [ "h = current_best_learning(animals_umbrellas, initial_h)\n", "\n", "for e in animals_umbrellas:\n", " print(guess_value(e, h))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We got everything right! Let's print our hypothesis:" ] }, { "cell_type": "code", "execution_count": 7, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "[{'Species': 'Cat', 'Rain': '!No'}, {'Species': 'Dog', 'Coat': 'Yes'}, {'Coat': 'Yes'}]\n" ] } ], "source": [ "print(h)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "If an example meets any of the disjunctions in the list, it will be `True`, otherwise it will be `False`.\n", "\n", "Let's move on to a bigger example, the \"Restaurant\" example from the book. The attributes for each example are the following:\n", "\n", "* Alternative option (`Alt`)\n", "* Bar to hang out/wait (`Bar`)\n", "* Day is Friday (`Fri`)\n", "* Is hungry (`Hun`)\n", "* How much does it cost (`Price`, takes values in [$, $$, $$$])\n", "* How many patrons are there (`Pat`, takes values in [None, Some, Full])\n", "* Is raining (`Rain`)\n", "* Has made reservation (`Res`)\n", "* Type of restaurant (`Type`, takes values in [French, Thai, Burger, Italian])\n", "* Estimated waiting time (`Est`, takes values in [0-10, 10-30, 30-60, >60])\n", "\n", "We want to predict if someone will wait or not (Goal = WillWait). Below we show twelve examples found in the book." 
] }, { "cell_type": "markdown", "metadata": {}, "source": [ "![restaurant](images/restaurant.png)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "With the function `r_example` we will build the dictionary examples:" ] }, { "cell_type": "code", "execution_count": 8, "metadata": {}, "outputs": [], "source": [ "def r_example(Alt, Bar, Fri, Hun, Pat, Price, Rain, Res, Type, Est, GOAL):\n", " return {'Alt': Alt, 'Bar': Bar, 'Fri': Fri, 'Hun': Hun, 'Pat': Pat,\n", " 'Price': Price, 'Rain': Rain, 'Res': Res, 'Type': Type, 'Est': Est,\n", " 'GOAL': GOAL}" ] }, { "cell_type": "markdown", "metadata": { "collapsed": true }, "source": [ "In code:" ] }, { "cell_type": "code", "execution_count": 9, "metadata": {}, "outputs": [], "source": [ "restaurant = [\n", " r_example('Yes', 'No', 'No', 'Yes', 'Some', '$$$', 'No', 'Yes', 'French', '0-10', True),\n", " r_example('Yes', 'No', 'No', 'Yes', 'Full', '$', 'No', 'No', 'Thai', '30-60', False),\n", " r_example('No', 'Yes', 'No', 'No', 'Some', '$', 'No', 'No', 'Burger', '0-10', True),\n", " r_example('Yes', 'No', 'Yes', 'Yes', 'Full', '$', 'Yes', 'No', 'Thai', '10-30', True),\n", " r_example('Yes', 'No', 'Yes', 'No', 'Full', '$$$', 'No', 'Yes', 'French', '>60', False),\n", " r_example('No', 'Yes', 'No', 'Yes', 'Some', '$$', 'Yes', 'Yes', 'Italian', '0-10', True),\n", " r_example('No', 'Yes', 'No', 'No', 'None', '$', 'Yes', 'No', 'Burger', '0-10', False),\n", " r_example('No', 'No', 'No', 'Yes', 'Some', '$$', 'Yes', 'Yes', 'Thai', '0-10', True),\n", " r_example('No', 'Yes', 'Yes', 'No', 'Full', '$', 'Yes', 'No', 'Burger', '>60', False),\n", " r_example('Yes', 'Yes', 'Yes', 'Yes', 'Full', '$$$', 'No', 'Yes', 'Italian', '10-30', False),\n", " r_example('No', 'No', 'No', 'No', 'None', '$', 'No', 'No', 'Thai', '0-10', False),\n", " r_example('Yes', 'Yes', 'Yes', 'Yes', 'Full', '$', 'No', 'No', 'Burger', '30-60', True)\n", "]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Say our initial hypothesis is that there should be an alternative option and lets run the algorithm." ] }, { "cell_type": "code", "execution_count": 10, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "True\n", "False\n", "True\n", "True\n", "False\n", "True\n", "False\n", "True\n", "False\n", "False\n", "False\n", "True\n" ] } ], "source": [ "initial_h = [{'Alt': 'Yes'}]\n", "h = current_best_learning(restaurant, initial_h)\n", "for e in restaurant:\n", " print(guess_value(e, h))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The predictions are correct. Let's see the hypothesis that accomplished that:" ] }, { "cell_type": "code", "execution_count": 11, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "[{'Alt': 'Yes', 'Type': '!Thai', 'Hun': '!No', 'Bar': '!Yes'}, {'Alt': 'No', 'Fri': 'No', 'Pat': 'Some', 'Price': '$', 'Type': 'Burger', 'Est': '0-10'}, {'Rain': 'Yes', 'Res': 'No', 'Type': '!Burger'}, {'Alt': 'No', 'Bar': 'Yes', 'Hun': 'Yes', 'Pat': 'Some', 'Price': '$$', 'Rain': 'Yes', 'Res': 'Yes', 'Est': '0-10'}, {'Alt': 'No', 'Bar': 'No', 'Pat': 'Some', 'Price': '$$', 'Est': '0-10'}, {'Alt': 'Yes', 'Hun': 'Yes', 'Pat': 'Full', 'Price': '$', 'Res': 'No', 'Type': 'Burger', 'Est': '30-60'}]\n" ] } ], "source": [ "print(h)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "It might be quite complicated, with many disjunctions if we are unlucky, but it will always be correct, as long as a correct hypothesis exists." 
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.6.7" } }, "nbformat": 4, "nbformat_minor": 2 }