public:t_720_atai:atai-18:constructivistai [Constructivist AI: Systems That Do More], thorisson, 2020/08/16 11:48 (current: external edit 2024/04/29 13:33)
  
 <m>A</m>: An agent. \\
 <m>Psi_A</m>: A cognitive architecture of agent <m>A</m>. \\
 <m>P_A</m>: A perception process of agent <m>A</m>. \\
 <m>#</m>: A pattern, at any level of detail.  \\
 Given a pattern <m>'&#35;'</m> and a perception process <m>P</m> of agent <m>A</m>, then \\
 <m>P_A</m><m>('&#35;'_n)</m> \\
 is the perception by agent <m>A</m> of pattern <m>'&#35;'_n</m>. \\
 <m>C_A</m>: A set of cognitive processes of agent <m>A</m>; <m>P_{A}subset{C_A}</m>. \\
 <m>d_A</m>: A decision mechanism of agent <m>A</m>; <m>{C_A} right {d_A}</m>. \\
 <m>B_A</m>: A set of actions or behaviors of agent <m>A</m>; <m>{C_A} right {B_A}</m>.
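The notation above can be made concrete with a toy sketch. This is purely my illustration, not part of the source formalism: it encodes <m>P_A</m> as a function from patterns to percepts, <m>d_A</m> as a function from percepts to behaviors, and <m>B_A</m> as the set of behaviors produced so far. All names (''Agent'', ''step'', etc.) are assumptions made for the example.

```python
# Toy encoding of the notation (illustrative assumption, not from the source):
# P_A maps a pattern '#' to a percept; d_A maps percepts to behaviors in B_A.
from dataclasses import dataclass, field
from typing import Callable, Set

Pattern = str    # '#'        : a pattern, at any level of detail
Percept = str    # P_A('#'_n) : A's perception of pattern '#'_n
Behavior = str   # an element of B_A

@dataclass
class Agent:
    """Agent A with perception P_A, decision mechanism d_A, behaviors B_A."""
    perceive: Callable[[Pattern], Percept]                 # P_A
    decide: Callable[[Percept], Behavior]                  # d_A
    behaviors: Set[Behavior] = field(default_factory=set)  # B_A

    def step(self, pattern: Pattern) -> Behavior:
        # C_A -> B_A: perception feeds the decision mechanism,
        # which selects a behavior and records it in B_A
        b = self.decide(self.perceive(pattern))
        self.behaviors.add(b)
        return b

# Usage: an agent that labels patterns and acts on the resulting percept
a = Agent(perceive=lambda p: f"percept({p})",
          decide=lambda q: f"act-on-{q}")
print(a.step("#_1"))   # act-on-percept(#_1)
```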
  
 ====== ======
 Our quest for systems that are more capable than those of yesteryear has brought us to the point of considering how such systems – that can adapt to a wide variety of contexts and learn a wide variety of tasks – should be architected. Since we cannot architect them by hand (we don't know what the wide variety of contexts and tasks may entail), we must impart some meta-principles to these systems, ways of having them "figure things out for themselves". Since current methods of software development are not up to the task, how does the desire for such systems affect our toolset? What new tools and methods do we need?
  
 If a system <m>A</m> autonomously increases the set of patterns <m>#</m> that it can recognize, and the set of states <m>S_o</m> that it can use as output to control the effects of the environment <m>E</m> on itself with respect to these patterns, the system is said to be //growing// its intelligence. Creation of models that describe <m>A</m>'s possible perceptions <m>P</m>, without increasing the potential of <m>A</m> to control <m>E</m>, is growth of the //knowledge// of <m>A</m>, which is a subset and prerequisite of intelligence. Knowledge, plus available behavior to control the environment for the purpose of achieving <m>A</m>'s goals, is intelligence.
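A minimal sketch of this distinction (my own illustration, with assumed names ''learn_pattern'' and ''learn_control''): growth of knowledge expands only the recognizable patterns <m>#</m>, while growth of intelligence also expands the output states <m>S_o</m> used to control <m>E</m>.

```python
# Illustrative-only sketch: knowledge growth vs. intelligence growth.
class GrowingAgent:
    def __init__(self):
        self.patterns = set()   # '#': patterns the agent can recognize
        self.outputs = set()    # S_o: output states for controlling E

    def learn_pattern(self, p):
        # knowledge growth: a new recognizable pattern, no new control
        self.patterns.add(p)

    def learn_control(self, p, action):
        # intelligence growth: knowledge plus behavior to control E
        self.patterns.add(p)
        self.outputs.add(action)

agent = GrowingAgent()
agent.learn_pattern("obstacle")            # knows *what*
agent.learn_control("obstacle", "avoid")   # also knows *how*
print(sorted(agent.patterns), sorted(agent.outputs))
```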
  
 The distinction between //knowing what// and //knowing how// highlights the importance of unifying the two in producing intelligence: It is conceivable that a human will know a lot about a certain phenomenon <m>phi</m> without knowing much or anything about what to do about <m>phi</m>; to do so, the human must have actionable knowledge -- an ability to turn its understanding of <m>phi</m> into actions that change or affect aspects of <m>phi</m>.
  
 So far, intelligence in this new formulation is thus the ability of a system to autonomously increase its own ability to control states of its environment, to achieve its goals. But we need more than that: the system must be able to generate subgoals autonomously.
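One simple way autonomous subgoal generation can be pictured (a hedged sketch under my own assumptions; the helper names ''decompose'' and ''satisfied'' are hypothetical, not from the source): the system decomposes a top-level goal and adopts as subgoals exactly the preconditions it does not yet satisfy.

```python
# Illustrative sketch: derive subgoals as the unmet preconditions of a goal.
def generate_subgoals(goal, satisfied, decompose):
    """Return the preconditions of `goal` not yet satisfied, as subgoals."""
    return [g for g in decompose(goal) if g not in satisfied]

subs = generate_subgoals(
    "make-tea",
    satisfied={"have-cup"},
    decompose=lambda g: ["have-cup", "boil-water", "have-teabag"],
)
print(subs)  # ['boil-water', 'have-teabag']
```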
  
 Any system capable of cognitive growth must be capable of some sort of self-evaluation, otherwise it will not be able to decide whether certain milestones in its growth are being reached, or whether changes made in light of experience have been for the better. The self-evaluation must in fact be of a particularly powerful kind, compared to most constructionist approaches to such evaluation that we could cook up, because large parts of the system's knowledge, as well as the architecturo-cognitive mechanisms that produced them, must be able to serve as the subject of such an evaluation. In its most extreme case the whole architecture <m>Psi</m> evaluates its present state in light of past state(s):
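The idea of evaluating the present state in light of past states can be sketched very minimally (my illustration only; a real architecture would evaluate its knowledge and mechanisms, not a single score): the system keeps snapshots of a performance measure and judges whether a change was for the better.

```python
# Illustrative-only sketch of self-evaluation over past states.
class SelfEvaluator:
    def __init__(self):
        self.history = []  # past states, here reduced to performance scores

    def record(self, score):
        self.history.append(score)

    def improved(self):
        # evaluate the present state in light of the most recent past state
        return len(self.history) >= 2 and self.history[-1] > self.history[-2]

psi = SelfEvaluator()
psi.record(0.4)
psi.record(0.7)
print(psi.improved())  # True
```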
 \\
  
 2018 (c) K. R. Thórisson
  
 //EOF//
