Thought Questions
Please note that successful submission of all Thought Question assignments during the course is a prerequisite for taking the final test, and thus for finishing the course.
In each lecture where there is new reading material, each student must create one “thought question” about the material. The question must relate to the reading and be thought-provoking, and it must come with some proposed answers; the more answers the better. If you manage to create a question on the topic, you receive one point; if the question is thought-provoking, or one can see that you put real thought into it, you receive two points. So there are two points available each time thought questions are due, giving you three possible grades: 0 (no points), 5 (one point) and 10 (two points).
You should also show up in class with your thought questions, as they will be reviewed and discussed (time permitting). Each time, a student will be selected at random and asked to read one or more of his/her questions to spark further discussion in class.
How to hand in: Every time there is new reading material, a project will appear in MySchool where you can hand in your thought questions.
Examples
Chapter 2
Let's take a look at some example thought questions from chapter 2 (you will NOT hand in thought questions for chapter 2).
Examples of good thought questions:
Q: Should the evaluation element of a learning agent be static or dynamic, meaning that it could change over time?
A: It should be static so that all the experience that has been gathered doesn't become invalid.
A: It should be dynamic: if the environment is dynamic, the critic should be able to adjust to it.
A: It should only change if its goals change.
Q: How can learning improve the types of agents mentioned in the chapter?
A: Simple reflex agents could learn new if-then rules or modify their existing ones. Model-based agents could use learning to update their models. Goal-based agents could use learning to improve their plan search. Utility-based agents could use learning to improve their utility functions.
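To make the first of these concrete, here is a minimal Python sketch of how a simple reflex agent's if-then rules could sit in a table that learning simply rewrites. The toy vacuum world, its percepts, actions and rules are all invented for illustration:

```python
# Hypothetical toy vacuum world: percepts are (location, status) pairs.
# The rule table IS the agent's behaviour; learning can rewrite entries.
rules = {
    ("A", "dirty"): "suck",
    ("A", "clean"): "move_right",
    ("B", "dirty"): "suck",
    ("B", "clean"): "move_left",
}

def reflex_agent(percept):
    """Look the percept up in the rule table and act; no model, no memory."""
    return rules.get(percept, "do_nothing")

# "Learning new rules or modifying existing ones" is then a table update:
rules[("A", "clean")] = "wait"       # a learned replacement rule

print(reflex_agent(("A", "dirty")))  # -> suck
```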
Q: Is it good for the programmer to know the environment which his/her agent will operate within?
A: Yes, because it will create a better agent.
A: No, because there is a danger of the agent's design being too dependent on the environment and its attributes.
Examples of Bad Thought Questions:
Q: What is an agent?
A: An agent is anything that can be viewed as perceiving its environment through sensors and acting upon that environment through actuators.
–This question is bad because it's too obvious.
Q: What types of agents are there?
A: Simple reflex, model-based, goal-based and utility-based.
–This question is bad because it is difficult to see that whoever wrote it spent any time thinking about the topic.
Chapter 24
From Bjarni Þór Árnason:
Q: Imagine a robot with vision that processes all of its captured data into a 3D model of the world. How can the robot make an educated guess about the back side (the side it doesn't see) of these objects? Humans can do that pretty easily.
A: One possible solution is to invert (mirror) the object and assume it's symmetric in 3D (is that even possible?).
A: Another solution is to use shadows and other objects as a hint. For example, my mousepad is sitting on my desk and therefore casts almost no shadow, so it most likely has a solid back with the same shape as the top side we see. But it's possible that it doesn't and is “hollow” inside.
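As an illustration of the symmetry idea, here is a minimal NumPy sketch that hypothesizes the unseen back side of an object by mirroring its visible surface points across an assumed symmetry plane. The point data and the plane are invented for the example; real code would first have to estimate the plane:

```python
import numpy as np

def mirror_points(points, plane_point, plane_normal):
    """Reflect visible surface points across an assumed symmetry plane
    to hypothesize where the unseen back side might be."""
    n = plane_normal / np.linalg.norm(plane_normal)
    d = (points - plane_point) @ n        # signed distance to the plane
    return points - 2.0 * np.outer(d, n)  # reflection across the plane

# Toy data: three points on the front face of a box (z is depth from camera).
front = np.array([[0.0, 0.0, 1.0],
                  [1.0, 0.0, 1.0],
                  [0.0, 1.0, 1.0]])

# Assume the object is symmetric about the plane z = 1.5.
back_guess = mirror_points(front,
                           plane_point=np.array([0.0, 0.0, 1.5]),
                           plane_normal=np.array([0.0, 0.0, 1.0]))
print(back_guess)  # hypothesized back-side points, all at z = 2.0
```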
Q: How can we improve our methods of recognizing objects? For example, recognizing a car as a car, not as 4 wheels, an aluminum body, seats, etc.
A: Motion blur (even though it's probably considered noise) could possibly help here, but only if the object is moving: if the whole object moves at the same speed in the same direction, it's a pretty safe bet that it's one whole. Another obvious way (most likely how it's solved today) is to take more pictures and compare them picture by picture, but then you have the problem of matching things that have possibly rotated, turned, moved, etc.
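The "moves together, belongs together" cue can be tried concretely with dense optical flow rather than motion blur. Here is a minimal OpenCV sketch; the file names and the 1-pixel thresholds are placeholder assumptions, not tuned values:

```python
import cv2
import numpy as np

# Two consecutive frames (placeholder file names), converted to grayscale.
prev = cv2.cvtColor(cv2.imread("frame1.png"), cv2.COLOR_BGR2GRAY)
curr = cv2.cvtColor(cv2.imread("frame2.png"), cv2.COLOR_BGR2GRAY)

# Dense optical flow: one (dx, dy) motion vector per pixel. Positional args:
# pyr_scale, levels, winsize, iterations, poly_n, poly_sigma, flags.
flow = cv2.calcOpticalFlowFarneback(prev, curr, None, 0.5, 3, 15, 3, 5, 1.2, 0)

mag = np.linalg.norm(flow, axis=2)
moving = mag > 1.0                    # "moves at all" threshold (a guess)
dominant = flow[moving].mean(axis=0)  # average (dx, dy) of the moving pixels

# Pixels whose motion agrees with the dominant vector probably belong
# to one rigid object moving "at the same speed in the same direction".
agree = np.linalg.norm(flow - dominant, axis=2) < 1.0
mask = (moving & agree).astype(np.uint8) * 255
cv2.imwrite("object_mask.png", mask)
```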
Q: The computer vision chapter sets forth a set of difficult problems in computer vision, such as separating foreground from background, separating shadows from physical things, estimating depth, etc. If we were to think outside the box, how could we possibly solve these problems?
A: As shown in the reading material, image filters such as the SUSAN edge-detection filter can be used to some degree. However, I'd like to consider another possibility: using multiple camera flashes to create more visible shadows while eliminating shadows at the same time. Imagine one flash mounted 1 meter to the right of the camera and another 1 meter to the left (possibly more), with an object in front of the camera. The camera would then take two pictures, one with each flash, and process them for differences. Using the intersections and unions of the shadows (and the lack of shadows), you should be able to construct an outline of the object. Obviously this method has its downsides, but it's something I found worth considering. It would also give us the advantage of being able to separate shadows from physical things, as well as foreground from background (flashes have limited range; the image could be compared to one taken without a flash).
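A minimal sketch of the two-flash idea, assuming placeholder image files and guessed thresholds: a pixel that is dark under one flash but lit under the other is in a cast shadow, and the union of the left- and right-flash shadow masks hugs the object's outline:

```python
import cv2
import numpy as np

# The same scene shot three times (placeholder files): left flash only,
# right flash only, and ambient light with no flash at all.
left    = cv2.imread("flash_left.png",  cv2.IMREAD_GRAYSCALE).astype(np.float32)
right   = cv2.imread("flash_right.png", cv2.IMREAD_GRAYSCALE).astype(np.float32)
ambient = cv2.imread("no_flash.png",    cv2.IMREAD_GRAYSCALE).astype(np.float32)

# A pixel in a flash-cast shadow is much darker in one flash image than
# in the other (the 30-grey-level threshold is an illustrative guess).
shadow_from_left  = (right - left)  > 30   # shadow cast by the left flash
shadow_from_right = (left  - right) > 30   # shadow cast by the right flash

# The union of the two shadow masks traces the object's silhouette, while
# dark surface markings (dark under both flashes) fall in neither mask:
# shadows separated from physical things.
outline = (shadow_from_left | shadow_from_right).astype(np.uint8) * 255
cv2.imwrite("outline.png", outline)

# Foreground/background: flashes have limited range, so pixels that barely
# brighten under either flash are probably distant background.
background = (np.maximum(left, right) - ambient) < 10  # another guess
```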
—
From Brynjar Reynisson:
Q: The perception chapter describes various algorithms and mathematical transformations for deciphering visual data. In nature, however, all higher-level visual processing happens through multilayered neural networks (at least that's what we think). What are some of the pros and cons of taking that approach to turning raw visual data into categorized information?
Pros: Obviously, humans and many animals are very good at dealing with visual changes in the environment (due to such things as lighting and egomotion), and also at recognizing things that are only faintly similar as belonging to the same group. How people and animals perceive the world evolves as they grow older, so learning is continuous, whereas our traditional way of dealing with neural networks has been to first train them for a period and then use whatever they're capable of after that. Learning more about how vision works in nature might also help with other, even more difficult cognitive areas.
Cons: We don't know how to do it: what are the functions, what are their relationships, how do you train each part? In addition, the parts we are able to build are something of a black box, i.e. we can't fully follow the reasons for why/how a neural network makes its choices; we have a general idea but can't say "because of A→B, and because of B→C". To have this work somewhat like it does in a complex evolved organism, we would need massive CPU power and extremely efficient concurrency; on the other hand, both are only getting better.
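For scale, here is a minimal PyTorch sketch of such a multilayered network for turning raw pixels into category scores. The layer sizes and the ten invented categories are arbitrary illustrations; real systems stack far more layers:

```python
import torch
import torch.nn as nn

# Raw pixels in, category scores out; each layer is a trainable function.
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),   # low-level, edge-like filters
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1),  # mid-level pattern detectors
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 8 * 8, 10),                    # scores for 10 made-up categories
)

x = torch.randn(1, 3, 32, 32)  # one 32x32 RGB image (random stand-in)
scores = model(x)              # shape (1, 10) -- the "black box" output
print(scores.shape)
```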
Q: Watching a football game with machine vision on my mind (neurons?), I wonder what it would take for an agent to correctly identify at least some of the events in a game. E.g. which team has the ball or is passing it, whether the ball is in play, and if it's out of the field, who touched it last (to infer who gets a throw-in or corner kick), whether the ball crosses the goal line, etc.
A: Many aspects of the game seem to be quite easily segmented. If you start with edge detection, you will get the moving players (and referee) and the ball, as well as the static painted lines and the goal posts. Following that, you could do some sort of coloring analysis of the players to see which team they belong to, or whether it's the referee we're looking at. Traditionally, face recognition is quite difficult, but it should be much easier once you've narrowed it down to only 10 possible faces (goalkeeper excluded, as he/she has a different outfit). Then you'd have some sort of collision detection to see who touched the ball last, and whether the ball has crossed the field's boundary (either out of play or into the goal). Depth perception can be tricky, but the static nature of the background (painted lines, goal posts, observer areas) should make the job easier. More difficult jobs include detecting which body parts a player uses to move the ball (everything but the hands is fair game, unless you're talking about the goalkeeper). Another tricky one would be to judge how legal a collision between players is - but that one is also very tricky for human referees…
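The coloring-analysis step could start as simply as counting jersey-colored pixels in a player crop. A hedged Python/OpenCV sketch, where the two HSV color ranges are placeholders for whatever kits the teams actually wear:

```python
import cv2
import numpy as np

# Illustrative HSV jersey ranges for two hypothetical kits (red vs. blue);
# real ranges would be calibrated per match and lighting conditions.
TEAM_RANGES = {
    "team_a": (np.array([0,   120, 70]), np.array([10,  255, 255])),
    "team_b": (np.array([100, 120, 70]), np.array([130, 255, 255])),
}

def classify_player(crop_bgr):
    """Guess a player's team from the dominant jersey color in a crop."""
    hsv = cv2.cvtColor(crop_bgr, cv2.COLOR_BGR2HSV)
    counts = {team: int(cv2.inRange(hsv, lo, hi).sum())
              for team, (lo, hi) in TEAM_RANGES.items()}
    best = max(counts, key=counts.get)
    # If no pixels match either kit color, it is probably the referee
    # (or a goalkeeper in a different outfit).
    return best if counts[best] > 0 else "referee"

# Usage on a detected player region: print(classify_player(frame[y0:y1, x0:x1]))
```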
Remember:
- Give answers to your questions. We cannot give full marks for questions without answers.
- Give more than one answer. This is a good way to show the quality of your thinking, as you need to look at the problem from more than one side.
- The questions don't need to have complete answers. If you can't really answer your question, you might be on the right track :)
- Try imagining that you are sitting in a café with some of your peers from the class. How would you pose an interesting question about the text you just read?
- The questions don't need to be straight from the text. Maybe you could form the question around what you could do with the technology discussed in the text.
- Someone told us there is this thing called the World Wide Web that has some amazing amounts of information, free for all.