public:t_720_atai:atai-18:lecture_notes_w5 [2018/09/11 14:42] – [Laird et al.: Requirements for AGI] thorisson; current revision 2024/04/29 13:33.
| Goal Diversity: \\ Breadth of goals | If a system X can meet a wider range of goals than system Y, system X is more //powerful// than system Y. |
| Generality | Any system X that exceeds system Y on one or more of the above dimensions is said to be more //general// than system Y, but typically pushing for increased generality means pushing on all of the above dimensions. |
| General intelligence... | ...means less needs to be known up front when the system is created; the system can learn to figure things out and handle itself, in light of **LTE**. |
| And yet: \\ The hallmark of an AGI | A system that can handle novel or **brand-new** //open problems//. The level of difficulty of the problems it solves would indicate its generality. |
\\
| No Certainty | R8. The system must be able to handle incompleteness, uncertainty, and inconsistency, both in state space and in time. | In any large world there will be unintended and unforeseen consequences to all changes, as well as potential errors in measurements (perception). Certainty can never be 1. \\ In other words, "Nothing is 100% (not even this axiom!)." |
| Abstractions | R9. The system must be able to generate abstractions from learned knowledge. | Abstractions are a kind of compression that allows more efficient management of small details, causal chains, etc. Abstraction is fundamental to induction (generalization) and analogies, two cognitive skills of critical importance in human intelligence. |
| Reasoning | R10. The system must be able to use applied logic - reasoning - to generate, manipulate, and use its knowledge. | Reasoning in humans is not the same as reasoning in formal logics; it is non-axiomatic and is always performed under uncertainty (per R8). |
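The non-axiomatic reasoning under uncertainty called for by R8 and R10 can be sketched with the kind of two-component truth values used in Wang's Non-Axiomatic Logic (NAL), where every belief carries a frequency (ratio of positive evidence) and a confidence that is always strictly below 1. The class and function names below are illustrative assumptions for this sketch, not part of the lecture notes or of any particular implementation:

```python
from dataclasses import dataclass

@dataclass
class Truth:
    f: float  # frequency: ratio of positive evidence, in [0, 1]
    c: float  # confidence: stability of f, in [0, 1) -- never exactly 1 (per R8)

def deduction(premise1: Truth, premise2: Truth) -> Truth:
    """Derive 'A -> C' from 'A -> B' and 'B -> C' using the NAL
    deduction truth function: the conclusion is never more certain
    than either premise, so uncertainty compounds along the chain."""
    f = premise1.f * premise2.f
    c = premise1.c * premise2.c * f
    return Truth(f, c)

# Two fairly reliable beliefs still yield a noticeably weaker conclusion.
ab = Truth(f=0.9, c=0.9)   # e.g. "ravens are birds"
bc = Truth(f=0.9, c=0.9)   # e.g. "birds fly"
ac = deduction(ab, bc)
print(round(ac.f, 4), round(ac.c, 4))  # 0.81 0.6561
```

Note how confidence drops faster than frequency: chaining more inference steps makes conclusions progressively less certain, which is one concrete way a system can satisfy the "certainty can never be 1" constraint of R8 while still reasoning (R10).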
\\