Lab 5: Bayesian Networks

In this session we will look at some basic Bayesian Networks and you will solve two problems using a Bayesian Network simulator. You can work together in groups. Hand in the results in MySchool.

Material

Here are two different Bayesian Network simulations that you can use:

Short explanation of how to use the first tool

The application has two tabs, “Create” and “Solve”. In “Create” you can create and modify a Bayesian network; in “Solve” you can compute arbitrary probabilities using the network. Suppose you have a network with nodes A and B, where B depends on A. To create the network, you need to:

  • create two nodes and give them the names A and B;
  • create an arc pointing from A to B;
  • modify the probability tables of both nodes and set the probabilities <latex>$P(A)$</latex>, <latex>$P(B|A)$</latex> and <latex>$P(B|\neg A)$</latex>.

Suppose you want to compute <latex>$P(\neg A|\neg B)$</latex>, that is, the probability of A being false given that you have observed B being false. Then you need to:

  • go to the “Solve” tab;
  • click on “Make Observation”, then on node B, and select “F” for false;
  • click on “Query” and then on node A (select Brief query mode if asked).
  • You should now see the probabilities for <latex>$P(A)$</latex> and <latex>$P(\neg A)$</latex> under the condition that B = false.
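
To double-check what the simulator reports, the same query can be computed by hand. Below is a minimal Python sketch of this check for the two-node example; the numbers <latex>$P(A) = 0.3$</latex>, <latex>$P(B|A) = 0.8$</latex> and <latex>$P(B|\neg A) = 0.1$</latex> are made up purely for illustration.

<code python>
# Made-up probabilities for the two-node network A -> B.
p_a = 0.3        # P(A)
p_b_a = 0.8      # P(B | A)
p_b_not_a = 0.1  # P(B | not A)

# Joint probabilities via the product form of Bayes' Law: P(X, Y) = P(Y|X) * P(X).
p_nota_notb = (1 - p_b_not_a) * (1 - p_a)  # P(not A, not B) = 0.9 * 0.7
p_a_notb = (1 - p_b_a) * p_a               # P(A, not B)     = 0.2 * 0.3

# P(not B) by summing out A, then condition on the observation B = false.
p_notb = p_nota_notb + p_a_notb
print("P(not A | not B) =", p_nota_notb / p_notb)  # 0.63 / 0.69 ~ 0.913
</code>

With these numbers the simulator should report roughly 0.91 for <latex>$P(\neg A)$</latex> after observing B = false.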

Laws of Probabilities

The following laws of probabilities might be helpful for developing formulas:

  • <latex>$P(\neg A) = 1 - P(A)$</latex>
  • Bayes' Law: <latex>$P(A \land B) = P(A|B) \cdot P(B)$</latex>
  • Marginalization/Summing out: <latex>$P(A) = \sum_b P(A \land b) = P(A \land B) + P(A \land \neg B)$</latex>
  • Independence 1: <latex>$P(A|B,C) = P(A|C)$</latex>, if <latex>$A$</latex> and <latex>$B$</latex> are independent given <latex>$C$</latex>.
  • Independence 2: <latex>$P(A \land B|C) = P(A|C) \cdot P(B|C)$</latex>, if <latex>$A$</latex> and <latex>$B$</latex> are independent given <latex>$C$</latex>. (This follows from Independence 1 and Bayes' Law.)

All of these laws also hold with arbitrary additional conditions, as long as the same condition is added to every probability in the formula:

  • Bayes' Law: <latex>$P(A \land B|C) = P(A|B,C) \cdot P(B|C)$</latex>
  • Marginalization/Summing out: <latex>$P(A|C) = \sum_b P(A \land b|C) = P(A \land B|C) + P(A \land \neg B|C)$</latex>
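
As an example of how these laws combine, the query from the walkthrough above, <latex>$P(\neg A|\neg B)$</latex>, can be expressed using only the probabilities stored in the network: rearranging Bayes' Law gives the first equality, and marginalization expands the denominator:

<latex>$P(\neg A|\neg B) = \frac{P(\neg A \land \neg B)}{P(\neg B)} = \frac{P(\neg B|\neg A) \cdot P(\neg A)}{P(\neg B|A) \cdot P(A) + P(\neg B|\neg A) \cdot P(\neg A)}$</latex>

Here <latex>$P(\neg B|A) = 1 - P(B|A)$</latex> and <latex>$P(\neg B|\neg A) = 1 - P(B|\neg A)$</latex>, so every factor comes directly from the two probability tables.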

Problem 1: Smelly Doors

You are writing a program to control a non-player character (NPC) in a game. The NPC is in a building full of doors. Behind each door there could be either a reward (e.g. health points) or a monster which the NPC must fight (losing health points); the room can also be empty. Once the NPC opens a door, it must fight the monster behind it, if any. However, before opening a door the NPC can stick its nose into the keyhole (it cannot look through it) and smell the air inside the room. The air will either smell bad or not. In summary:

  • The NPC should seek reward but avoid monsters;
  • The NPC doesn't know what's behind a door in advance, but…
  • it can check whether the room smells bad or not and use that information as an indicator.
  1. Design a very simple Bayesian network (2 nodes) that the NPC can use to decide, when standing in front of a door, whether it should open it or not. Start by inventing reasonable probabilities for the relation between the contents of the room and its smell.
  2. How probable is it that the room contains a monster if the air smells bad?
  3. Argue how, instead of using made-up probabilities, the NPC can learn as it opens doors and dynamically update the Bayesian network, becoming smarter over time (one possible counting approach is sketched below).
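
For question 3, one possible approach (a sketch only, with hypothetical variable names; not the only acceptable answer) is to keep counts of what the NPC finds behind doors and re-estimate the table entries from relative frequencies, using add-one (Laplace) smoothing so that combinations it has not seen yet never get probability zero:

<code python>
from collections import Counter

# Hypothetical experience log: (room_contents, smelled_bad) pairs,
# one entry recorded each time the NPC opens a door.
experience = [("monster", True), ("reward", False), ("empty", False),
              ("monster", True), ("monster", False), ("reward", False)]

contents_counts = Counter(c for c, _ in experience)
smell_counts = Counter(experience)

def p_smell_given(contents, smelled_bad=True):
    """Estimate P(Smell | Contents) with add-one smoothing (smell has 2 values)."""
    return (smell_counts[(contents, smelled_bad)] + 1) / (contents_counts[contents] + 2)

print("P(bad smell | monster) ~", p_smell_given("monster"))  # (2+1)/(3+2) = 0.6
</code>

After each opened door the NPC appends one observation and recomputes the tables, so its network keeps improving as it explores.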

Hand in:

  1. the Bayesian network (e.g., a screenshot and the probability tables for all nodes of the network)
  2. answer to question 2 above
  3. a few sentences on how the NPC could learn the necessary probabilities

Problem 2: The Pirate Treasure

You are a seasoned tomb raider and have spent the last week rummaging through an old pirate cove full of treasure. So far you have opened 100 chests, and of those, 50 have in fact contained treasure! Out of these 50, 40 were trapped, and you sustained some painful damage from opening them. Out of these 40 trapped chests, 28 were also locked. Now, of the 10 untrapped chests, three were locked. One would think that only chests with treasure would be trapped, but these pirates were truly nasty: they also put traps on chests with no treasure. Of the 50 chests containing no treasure, 20 were trapped! You forgot how many of the chests without treasure were locked, but you believe that the ratios were similar to those for the chests with treasure.

You have now discovered a new chest that you haven't seen before. When you take a careful look, you notice that it is locked. What is the chance that this chest contains treasure? What is the chance that it is trapped? You are not feeling so good after all the previous traps, so is it worth opening this chest if your life is on the line?

Construct a Bayesian Network to answer these questions and discuss what you would do.

Develop a formula for computing the probability of a chest containing treasure, given that you observe whether it is locked or not. The formula must only use probabilities that are given in the Bayesian network. (Hint: use Bayes' Law from the lecture.) Check whether your formula (or the Bayesian network) is correct by comparing the values computed with the formula to those you get from the simulator.
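
If you want a quick sanity check of your formula against the simulator, a generic helper along these lines may help. It is only a sketch with placeholder names, implementing Bayes' Law for a binary hypothesis H given binary evidence E:

<code python>
def posterior(prior, like_if_true, like_if_false):
    """P(H | E) via Bayes' Law:
    P(E|H) * P(H) / (P(E|H) * P(H) + P(E|not H) * P(not H))."""
    num = like_if_true * prior
    return num / (num + like_if_false * (1 - prior))

# Made-up numbers for illustration; plug in your own values. Note that in a
# multi-node network, P(E|H) may itself have to be computed by summing out
# intermediate variables (here, whether the chest is trapped).
print(posterior(0.5, 0.6, 0.4))  # -> 0.6
</code>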

Hand in:

  1. the Bayesian network (e.g., a screenshot and the probability tables for all nodes of the network)
  2. the answers to the three questions in the text
  3. the formula