public:t-720-atai:atai-22:understanding — revised 2022/10/18 12:47 by thorisson; last modified 2024/04/29 13:33 (external edit)
|  Aaron  | http://prostheticknowledge.tumblr.com/post/20734326468/aaron-the-first-artificial-intelligence-creative  \\ https://www.youtube.com/watch?v=3PA-XApZkso   |
|  \\ Thaler's \\ Creativity Machine  | http://www.imagination-engines.com \\ CM patented in 1994 \\ A few years later: the CM makes an invention that gets a patent from the United States Patent & Trademark Office (USPTO) \\ [[http://patft.uspto.gov/netacgi/nph-Parser?Sect2=PTO1&Sect2=HITOFF&p=1&u=%2Fnetahtml%2FPTO%2Fsearch-bool.html&r=1&f=G&l=50&d=PALL&RefSrch=yes&Query=PN%2F5659666|CM patent]] \\ What it is: ANN, becomes "creative" by "relaxing some parameters" so that the ANN "begins to hallucinate"  |
|  Are these machines creative?  | Maybe - in some sense of the concept. \\ Are they //truly// creative? Probably not. \\ How so? What does it mean to be "truly creative"? \\ That would be the **full monty**: the ability to see unique, valuable solutions to a wide range of challenging problems better than others.    |
|  Creativity is \\ a Relative Term  | It is somewhat unavoidable to interpret the concept of 'creativity' as a relative term - i.e. "person (or system) X is more creative than person (or system) Y" - as no absolute scale for it exists, as of yet. (It is possible that AI / AGI research may one day develop such a scale.)   |
|  Intermediate Conclusion  | To answer the question "Do creative machines exist?" we must inspect the concept of creativity in more detail.    |
  
  
==== Self-Explaining Systems ====
  
|  What It Is  | The ability of a controller to explain, before or after the fact, why it did something or intends to do it.   |
|  'Explainability' \\ ≠ \\ 'Self-Explanation'  | If an intelligence X can explain a phenomenon Y, then Y is 'explainable' by X, through some process chosen by X. \\ \\ In contrast, if an intelligence X can explain itself - its own actions, knowledge, understanding, beliefs, and reasoning - it is capable of self-explanation. The latter is stronger and subsumes the former.   |
|  Why It Is Important  | If a controller does something we don't want it to repeat - e.g. crash an airplane full of people (in simulation mode, hopefully!) - it needs to be able to explain why it did what it did. If it can't, neither it nor //we// can ever be sure why it did what it did, whether it had any other choice, whether it's an evil machine that actually meant to do it, or how likely it is to do it again.     |
|  \\ Human-Level AI  | Even more importantly, to grow, learn, and self-inspect, an AI system must be able to sort out causal chains. If it can't, it will not only be incapable of explaining to others why it is the way it is; it will be incapable of explaining to itself why things are the way they are, and thus incapable of sorting out whether something it did is better for its own growth than something else. Explanation is the big black hole of ANNs: in principle ANNs are black boxes, and thus they are in principle unexplainable - whether to themselves or to others. \\ One way to address this is to encapsulate knowledge as hierarchical models that are built up over time and can be de-constructed at any time (as AERA does).   |
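The idea of hierarchical models that can be de-constructed into a causal chain can be sketched in a few lines. This is only an illustrative toy, not AERA's actual mechanism; the names (''Model'', ''explain'') and the airplane scenario are hypothetical, chosen to echo the example above: because knowledge is stored as small composable models, the controller can answer "why did you do that?" by unwinding the hierarchy.

```python
# Toy sketch (NOT AERA): knowledge stored as small hierarchical models
# that can be de-constructed at any time into a causal explanation.

class Model:
    """One piece of knowledge: a premise-conclusion pair, possibly
    built on top of lower-level models acquired earlier."""

    def __init__(self, name, premise, conclusion, submodels=()):
        self.name = name
        self.premise = premise
        self.conclusion = conclusion
        self.submodels = submodels  # the models this one was built from

    def explain(self, depth=0):
        """De-construct this model into the causal chain behind it."""
        lines = ["  " * depth + f"{self.name}: {self.premise} -> {self.conclusion}"]
        for sub in self.submodels:
            lines.extend(sub.explain(depth + 1))
        return lines


# Models accumulate bottom-up over time...
stall = Model("stall-model", "airspeed below stall speed", "lift is lost")
pitch = Model("pitch-model", "nose-down input", "airspeed increases")

# ...and later get composed into higher-level knowledge.
recover = Model("recovery-model", "stall detected", "command nose-down",
                submodels=(stall, pitch))

# After acting, the controller can answer "why did you do that?"
print("\n".join(recover.explain()))
```

The point of the sketch is the contrast with a monolithic black box: because ''recovery-model'' retains references to the models it was built from, the causal chain is recoverable by construction rather than reverse-engineered after the fact.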
  
