Final Exam T-720-ATAI-2018
The final exam will be a 3-hour closed-book exam.
Questions will focus on your understanding of the material presented in the course and covered in the assigned readings, with an emphasis on the main concepts and topics and on your ability to comprehend them holistically.
If you are unsure which topics are the main ones and which are sub-topics or less important, look at the lecture notes: if a topic is mentioned there, it is important; if it is mentioned more than once, it is even more important. Above all, however, the exam focuses on your comprehension of the relationships between topics and your ability to put them in context with the present state and future of AI.
Below are some example questions. Please note that this is not an exhaustive list of the kinds of questions that may appear on the final exam; it is a representative sample provided to help you prepare. These are examples only: a specific question below may, or may not, appear on your final exam, in this or a modified form.
Example questions
Example Question 1
1. For a space-junk cleanup task, the European Space Agency (ESA) is planning to build an autonomous robotic satellite craft that can obliterate orbiting space junk with a powerful laser. Each encounter of the craft with a piece of junk must be documented with a camera and uploaded to a computer as soon as it is finished. For each encounter the laser must be tuned based on the junk's weight, density, and speed of motion relative to the craft. For most of the junk that the craft will encounter, no records of size, weight, speed, or other information exist. Everything onboard the craft, including its small electric-power thrusters, is powered by its batteries, which are charged via its solar panels. The craft will stay in orbit for at least twenty years.
1a. ESA is unsure what kind of control system to use for the craft. They want you to write a report with an analysis of the requirements of the control system and recommendations for how it should be designed. They are especially interested in novel ideas and approaches. (i) What will your report say about the control system requirements? (ii) What will your recommendation look like? (Make sure you present strong and clear arguments for your answer.)
1b. A few months later ESA adds the following issue: They have determined that there is so much space junk that the craft will likely have to fire at multiple objects simultaneously. This puts extra strain on the scheduling of events, including charging the batteries, keeping track of junk items, and making sure the craft is not hit by debris, among other things. ESA wants to develop a system based on the AGI-aspiring NARS system to control the photography unit, and they want you to propose a way of using NARS for this purpose. Please give an outline of your suggested first steps, and make sure you list the strengths and weaknesses of NARS for this task.
Example Question 2
What is cognitive growth? Give some examples of cognitive growth. Why is cognitive growth an issue in constructivist AI but not in "good old-fashioned AI" (GOFAI)? [10%]
Example Question 3
Why is architecture important in constructivist AI? [5%]
Example Question 4
Describe a constructionist architecture. Explain why it falls short of providing a platform for developing constructivist AI systems. Use illustrations as appropriate. [15%]
Example Question 5
Explain the perception-action loop. Why is it of importance in constructivist AI? Name two other loops that can be found in constructivist AI systems. [10%]
Example Question 6
Explain why causal knowledge is important for an AGI and how it may help a system learn.
Example Question 7
Given the following experience,
<dog --> mammal>. %1.0; 0.9%
<cat --> mammal>. %0.95; 0.9%
<mammal --> vertebrate>. %1.0; 0.8%
using NARS' high-level inference rules, give the truth values for the
following queries and state which inference rule was used in the derivation.
Assume the evidential horizon k=1.
(a) <dog --> cat>?
(b) <dog --> vertebrate>?
(c) <vertebrate --> cat>?
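As a preparation aid for questions of this kind, below is a minimal Python sketch of the NAL truth-value functions most often needed (deduction, abduction, induction), assuming the standard definitions from the NARS/NAL readings. The function names, premise ordering, and the example call are illustrative only; verify the formulas against the assigned material before relying on them.

# Minimal sketch of NAL truth-value functions, assuming the standard
# definitions from the NARS readings. Premise order and formulas should
# be checked against the course material.

def deduction(f1, c1, f2, c2):
    # {<M --> P> %f1;c1%, <S --> M> %f2;c2%} |- <S --> P>
    return f1 * f2, f1 * f2 * c1 * c2

def abduction(f1, c1, f2, c2, k=1.0):
    # {<P --> M> %f1;c1%, <S --> M> %f2;c2%} |- <S --> P>
    w = f1 * c1 * c2            # amount of evidence
    return f2, w / (w + k)      # (frequency, confidence)

def induction(f1, c1, f2, c2, k=1.0):
    # {<M --> P> %f1;c1%, <M --> S> %f2;c2%} |- <S --> P>
    w = f2 * c1 * c2
    return f1, w / (w + k)

# Illustrative usage: a deduction from
# <mammal --> vertebrate>. %1.0;0.8% and <dog --> mammal>. %1.0;0.9%
print(deduction(1.0, 0.8, 1.0, 0.9))   # (1.0, 0.72)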