====== REQUIREMENTS FOR NEXT-GEN AI ======
//Autonomy, Cause-Effect Knowledge, Cumulative Learning, Empirical Reasoning, Trustworthiness//
\\
| Explanation Depends on Causation |
| \\ Bottom Line for \\ Human-Level AI | To grow, learn, and self-inspect, an AI must be able to sort out causal chains. If it cannot, it will not only be incapable of explaining to others why it is the way it is; it will be incapable of explaining to itself why things are the way they are, and thus of sorting out whether something it did was better for its own growth than something else. Explanation is the big black hole of ANNs: in principle ANNs are black boxes, and thus they are in principle unexplainable, whether to themselves or to others. \\ One way to address this is to encapsulate knowledge as hierarchical models that are built up over time and can be de-constructed at any time (as AERA does). |
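The hierarchical-model idea above can be sketched in a few lines of Python. This is a hypothetical illustration, not AERA's actual representation: each model simply keeps references to the sub-models it was built from, so any conclusion can later be de-constructed into the causal chain that produced it.

```python
class Model:
    """A knowledge unit that remembers the sub-models it was built from."""

    def __init__(self, name, premises=()):
        self.name = name          # what this model describes or predicts
        self.premises = premises  # sub-models this one was composed from

    def explain(self, depth=0):
        """De-construct the hierarchy into an indented causal chain."""
        lines = ["  " * depth + self.name]
        for premise in self.premises:
            lines.extend(premise.explain(depth + 1))
        return lines


# Built up over time, inspectable at any time:
sprinkler = Model("sprinkler was on")
wet = Model("grass is wet", premises=(sprinkler,))
print("\n".join(wet.explain()))
```

Because every model carries its own construction history, explanation is a traversal rather than a post-hoc approximation, which is exactly what a monolithic black-box function cannot offer.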
+ | |||
+ | \\ | ||
+ | \\ | ||
+ | |||
+ | ==== Self-Explaining Systems ==== | ||
+ | |||
| What It Is | The ability of a controller to explain, after the fact or before, why it did something or intends to do it. |
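The before-and-after distinction in this definition can be made concrete with a minimal sketch (all names hypothetical): a controller that records the reason alongside each action can report an intention before acting and a justification afterwards, from the same record.

```python
class SelfExplainingController:
    """Records (action, reason) pairs so it can explain itself at any time."""

    def __init__(self):
        self.log = []  # each entry: action, reason, and whether it was executed

    def intend(self, action, reason):
        """Commit to an action, storing why, before doing anything."""
        self.log.append({"action": action, "reason": reason, "done": False})

    def act(self):
        """Execute all pending intentions (stand-in for real actuation)."""
        for entry in self.log:
            entry["done"] = True

    def explain(self):
        """Explain before the fact ('intends to') or after it ('did')."""
        return [
            f"{'did' if e['done'] else 'intends to'} {e['action']}"
            f" because {e['reason']}"
            for e in self.log
        ]


ctrl = SelfExplainingController()
ctrl.intend("brake", "obstacle ahead")
print(ctrl.explain())  # explanation of intent, before acting
ctrl.act()
print(ctrl.explain())  # explanation of the same action, after the fact
```

The design choice here is that explanation data is captured at decision time, not reconstructed afterwards; a controller that discards its reasons can only rationalize, not explain.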
+ | | ' | ||
+ | | Why It Matters | ||
+ | | Why It Matters \\ More Than You Think | The ' | ||
+ | | \\ Human-Level AI | Even more importantly, | ||
\\
/var/www/cadia.ru.is/wiki/data/attic/public/t-709-aies-2024/aies-2024/next-gen-ai-requirements.1726390161.txt.gz · Last modified: 2024/09/15 08:49 by thorisson