[[http://cadia.ru.is/wiki/public:t-720-atai:atai-16:main|T-720-ATAI-2016 Main]]

=====T-720-ATAI-2016=====
====Lecture Notes, F-12 08.03.2016====

\\ \\ \\ \\

==== Self-Programming ====

| What it is | //Self-programming// here means, with respect to some virtual machine M, the production of one or more programs created by M itself, whose //principles// for creation were provided to M at design time, but whose details were //decided by M// at runtime, based on its //experience//. |
| Self-Generated Program | Determined by some factors in the interaction between the system and its environment. |
| Historical note | The concept of self-programming is old (J. von Neumann was one of the first to discuss self-replication in machines). However, few if any proposals for how to achieve it have been fielded. [[https://en.wikipedia.org/wiki/Von_Neumann_universal_constructor|Von Neumann's universal constructor on Wikipedia]] |
| No guarantee | The fact that a system can program itself is no guarantee that it is in a better position than a traditional system. In fact, it may be in a worse one, because there are now more ways in which its performance can go wrong. |
| Why we need it | The inherent limitations of hand-coding make traditional manual programming approaches unlikely to reach the level of a human-grade generally intelligent system: to adapt to a wide range of tasks, situations, and domains, a system must be able to modify itself in more fundamental ways than a traditional software system can. |
| Remedy | Sufficiently powerful principles are needed to insure against the system going rogue. |
| The Self of a machine | **C1:** The processes that act on the world and the self (via senctors), that evaluate the structure and execution of code in the system, and that synthesize new code. \\ **C2:** The models that describe the processes in C1, entities and phenomena in the world -- including the self in the world -- and processes in the self. Goals contextualize models and also belong to C2. \\ **C3:** The states of the self and of the world -- past, present and anticipated -- including the inputs/outputs of the machine. |
| Bootstrap code | A.k.a. the "seed". Bootstrap code may consist of ontologies, states, models, internal drives, exemplary behaviors and programming skills. |

\\ \\

==== Programming for Self-Programming ====

| Can we use LISP? | Any language with features similar to LISP (e.g. Haskell, Prolog, etc.), i.e. the ability to inspect itself and to turn data into code and code into data, should //in theory// be capable of sustaining a self-programming machine. |
| Theory vs. practice | "In theory" is most of the time //not good enough// if we want to see something soon (as in the next decade or two), and that is the case here too: what is good for a human programmer is not so good for a system that has to synthesize its own code in real time. |
| Why? | Building a machine that can write (sensible, meaningful!) programs means that the machine is smart enough to understand the code it produces. If the purpose of its programming is to //become// smart, and the programming language we give it //assumes it is smart already//, we have defeated the purpose of creating the self-programming machine in the first place. |
| What can we do? | We must create a programming language with //simple enough// semantics so that a simple machine (perhaps with some clever emergent properties) can use it to bootstrap itself in learning to write programs. |
| Does such a language exist? | Yes. It's called [[http://xenia.media.mit.edu/~kris/ftp/nivel_thorisson_replicode_AGI13.pdf|Replicode]]. |
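To make the code-as-data requirement above concrete, here is a minimal sketch in Python (purely illustrative -- this is //not// Replicode, and all names in it, such as ''PRIMITIVES'', ''run'' and ''rewrite'', are invented for this sketch) of a system that stores its programs as plain data structures, so that a very simple process can inspect them, revise them in light of experience, and execute the result.

<code python>
# Minimal sketch: programs as data that the system itself can inspect and rewrite.
# Illustrative only; this is NOT Replicode, and the primitives are made up.

PRIMITIVES = {
    "inc": lambda x: x + 1,
    "dbl": lambda x: x * 2,
}

def run(program, x):
    """Interpret a program (a plain list of primitive names) on input x."""
    for op in program:
        x = PRIMITIVES[op](x)
    return x

def rewrite(program, target, x0):
    """Naive self-revision: try appending each primitive, keep the best variant."""
    best, best_err = program, abs(run(program, x0) - target)
    for op in PRIMITIVES:
        candidate = program + [op]
        err = abs(run(candidate, x0) - target)
        if err < best_err:
            best, best_err = candidate, err
    return best

program = ["inc"]                  # seed program, provided at design time
for _ in range(5):                 # at runtime the system revises its own program
    program = rewrite(program, target=10, x0=1)
print(program, run(program, 1))    # a program mapping 1 to 10 (or close to it)
</code>

The point is not the (trivial) revision rule, but that the program being improved is itself ordinary data to the machine that runs it, which is what a language with "simple enough semantics" has to make easy.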
\\ \\

==== Self-Modeling ====

| What it is | A self-modeling system contains a model of itself. |
| What it is good for | A system with a model of itself can use this model to predict and explain its own actions, for improvement, analysis, or other purposes. |
| Key Principle | A system continuously modeling its own operation has to do so at multiple levels of abstraction, from program rewriting up to the level of global processes (e.g. the utility function), eventually turning into a fully self-modeling system. |

\\ \\

==== Anatomy of a Self-Programming System ====

| I/O Devices | Separate the interface / "API of the world" from the controller. |
| Controller = logic+models | Models control the (re)programming. Self-programming must be methodical; typically it is controlled by a Drive in the Seed. (A minimal sketch of this anatomy follows the Integrated Cognitive Control table below.) |
| Required: Goal | Self-programming has to be performed in light of goal achievement. |

\\ \\

==== Autonomy & Closure ====

| Autonomy | The ability to do tasks without interference / help from others. |
| Cognitive Autonomy | Refers to the mental (control-) independence of agents - the more independent they are (of their designers, of outside aid, etc.) the more autonomous they are. Systems without it could hardly be considered to have general intelligence. |
| Structural Autonomy | Refers to the process through which cognitive autonomy is achieved: motivations, goals and behaviors are dynamically and continuously (re)constructed by the machine as a result of changes in its internal structure. |
| Operational closure | The system's own operations are all that is required to maintain (and improve) the system itself. |
| Semantic closure | The system's own operations and experience produce/define the meaning of its constituents. //Meaning// can thus be seen as being defined/given by the operation of the system as a whole: the actions it has taken, is taking, could take, and has thought about (simulated) taking, both cognitive actions and external actions in its physical domain. For instance, the meaning of punching your best friend is the set of implications - actual and potential - that this action has or may have, including its impact on your own cognition. |
| Self-Programming in Autonomy | The global process that animates computational structurally autonomous systems, i.e. the implementation of both the operational and semantic closures. |
| System evolution | A controlled and planned reflective process; a global and never-terminating process of architectural synthesis. |

\\ \\

==== Integrated Cognitive Control ====

| What it is | The ability of a controller / cognitive system to steer its own structural development - architectural growth (cognitive growth). The (sub-)system responsible for meta-learning. |
| Cognitive Growth | The structural change resulting from learning in a structurally autonomous cognitive system - the target of which is self-improvement. |
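The following toy sketch (Python; the names ''World'', ''Controller'' and ''drive'' are hypothetical and not taken from any existing system) illustrates the anatomy above in miniature: I/O is kept separate from the controller, the controller holds an explicit goal and a (trivially simple) model of how its actions affect the world, and a drive from the seed triggers revision of that model when goal achievement stays poor. Real self-programming would of course rewrite models and code, not just a single parameter.

<code python>
# Toy sketch of the anatomy above: separate I/O, a controller with a goal and a
# model, and a seed drive that revises the model when performance stays poor.
# Purely illustrative; not AERA or any other specific architecture.

import random

class World:
    """Stands in for the I/O devices / 'API of the world'."""
    def __init__(self):
        self.state = 0.0
    def sense(self):
        return self.state
    def act(self, u):
        self.state += u + random.gauss(0, 0.1)   # noisy effect of the action

class Controller:
    def __init__(self, goal, gain=0.05):
        self.goal = goal     # required: an explicit goal
        self.gain = gain     # the controller's (minimal) model of action effects
        self.errors = []

    def step(self, world):
        err = self.goal - world.sense()
        self.errors.append(abs(err))
        world.act(self.gain * err)
        self.drive()

    def drive(self):
        """Seed drive: if recent errors stay large, revise the model itself."""
        recent = self.errors[-5:]
        if len(recent) == 5 and min(recent) > 0.5:
            self.gain *= 1.5  # a stand-in for rewriting part of the controller

world, ctrl = World(), Controller(goal=10.0)
for _ in range(60):
    ctrl.step(world)
print(round(world.sense(), 2), round(ctrl.gain, 3))
</code>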
\\ \\

==== Autonomy ====

| {{public:t-720-atai:autonomy-dimensions1.png}} |
| “Autonomy comparison framework focusing on mental capabilities. Embodiment is not part of the present framework, but is included here for contextual completeness.” [[http://xenia.media.mit.edu/~kris/ftp/AutonomyCogArchReview-ThorissonHelgason-JAGI-2012.pdf|source]] |

\\ \\

==== Self-Awareness & Consciousness ====

| Disclaimer | Are we talking about phenomenological experience here, that is, what it is //like to be a perceiving, thinking being?// The short answer is "no": there is no need for the concept of phenomenological experience here; we can make use of standard machine-like control mechanisms. Nevertheless, taking the information-centric view, there is some amount of information that is needed for a system to be called "self-aware". It is this information that we discuss here. |
| Self-awareness | Self-awareness requires a conception of //self//. A self is not a complex concept -- it can be represented by a battery of models, just like anything else. The self is //special//, however, in that it is a //prerequisite// for anything else happening, and thus protecting it is of utmost importance to an intelligent agent. |
| Components | What are the parts of a self? It can be thought of as an onion: the innermost core is the internal processes of the controller, that which is absolutely //necessary// for there to be any goal-directed activity in the controller; the next ring out contains acquired knowledge and other such elements which may be valuable but are not indispensable; the third ring out might be the body of the agent (the parts that are not part of the innermost core). |
| A-brain / B-brain | With a separation between the domain system and the cognitive control system, referred to as the A-brain and B-brain, respectively, we can see how a single general-purpose system can be replicated to control itself in a way that implements integrated cognitive control. Of course, each system may have a different seed (we'll talk about seeds in the next lecture). |

\\ \\

====Levels of Self-Programming====

| Level 1 | Level-one self-programming capability is the ability of a system to make programs that exclusively make use of the primitive actions in its action set. |
| Level 2 | Level-two self-programming systems can do Level 1, and additionally generate new primitives. |
| Level 3 | Level-three self-programming adds the ability to change the principles by which Levels one and two operate; in other words, Level-three self-programming systems are capable of what we would here call meta-programming. This would involve changing or replacing some or all of the programs provided to the system at design time. Of course, the generation of primitives and the changing of principles are themselves controlled by programs. (A toy illustration of the three levels follows this table.) |
| Infinite regress? | Though self-programming can be carried out at more than one level, the regress eventually stops at some level. The more levels are involved, the more flexible the system will be, though at the same time it will be less stable and harder to analyze. |
| Likely to be many ways? | For AGI the set of relevant self-programming approaches is likely to be much smaller than the set typically discussed in computer science, and in all likelihood much smaller than often implied in the AGI literature. |
| Architecture | The possible solutions for effective and efficient self-programming are likely to be strongly linked to what we generally think of as the //architectural structure// of AI systems, since self-programming for AGI may fundamentally have to change, modify, or partly duplicate some aspect of the system's architecture, for the purpose of being better equipped to perform some task or set of tasks. |
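As a toy illustration of these levels (Python; ''level1_search'', ''level2_add_primitive'', ''level3_change_principle'' and the primitives are all invented for this sketch and do not correspond to any real system), consider a machine whose programs are sequences of primitive actions: Level 1 only composes the given primitives, Level 2 promotes a useful composition to a new primitive, and Level 3 replaces the program-generation principle itself.

<code python>
# Toy illustration of the three levels of self-programming. Purely schematic;
# the primitives, the search procedures and all names are made up.
import itertools
import random

primitives = {"inc": lambda x: x + 1, "dbl": lambda x: x * 2}

def level1_search(target, x0, depth=4):
    """Level 1: build programs exclusively from the given primitive actions."""
    for n in range(1, depth + 1):
        for prog in itertools.product(primitives, repeat=n):
            x = x0
            for op in prog:
                x = primitives[op](x)
            if x == target:
                return list(prog)
    return None

def level2_add_primitive(name, prog):
    """Level 2: promote a useful composed program to a new primitive action."""
    ops = [primitives[op] for op in prog]
    def composed(x):
        for f in ops:
            x = f(x)
        return x
    primitives[name] = composed

def level3_change_principle():
    """Level 3: replace the program-generation principle itself
    (here: swap exhaustive enumeration for random sampling)."""
    def random_search(target, x0, depth=4, tries=1000):
        for _ in range(tries):
            prog = [random.choice(list(primitives))
                    for _ in range(random.randint(1, depth))]
            x = x0
            for op in prog:
                x = primitives[op](x)
            if x == target:
                return prog
        return None
    return random_search

p = level1_search(10, 1)             # Level 1: e.g. ['inc', 'dbl', 'inc', 'dbl']
level2_add_primitive("reach10", p)   # Level 2: the composition becomes a primitive
search = level3_change_principle()   # Level 3: the generation principle is replaced
print(p, search(10, 1))              # the new search can now also use 'reach10'
</code>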
\\ \\

====Systems With Self-Programming Potential====

^ Label ^ What ^ Example ^ Description ^
| [S] | State-space search | GPS (Newell et al. 1963) | The atomic actions are state-changing operators, and a program is represented as a path from the initial state to a final state (see the sketch after this table). Variants of this approach include program search (example: the Gödel Machine (Schmidhuber 2006)): given the action set A, in principle all programs formed by it can be exhaustively listed and evaluated to find an optimal one according to certain criteria. |
| [P] | Production system | SOAR (Laird 1987) | Each production rule specifies the condition for a sequence of actions that correspond to a program. Mechanisms that produce new production rules, such as chunking, can be considered self-programming. |
| [R] | Reinforcement learning | AIXI (Hutter 2007) | When an action of an agent changes the state of the environment, and each state has an associated reward value, a program corresponds to a policy in reinforcement learning. When the state-transition function is probabilistic, this becomes a Markov decision process. |
| [G] | Genetic programming | Koza’s Invention Machine (Koza et al. 2000) | A program is formed from the system’s actions, initially randomly but subsequently via genetic operators over the best performers from prior solutions, possibly by using the output of some actions as input to other actions. An evolutionary process uses a utility function to select the best programs, and the process is repeated. |
| [I] | Inductive logic programming | Muggleton 1994 | A program is a statement with a procedural interpretation, which can be learned from given positive and negative examples, plus background knowledge. |
| [E] | Evidential reasoning | NARS (Wang 2006) | A program is a statement with a procedural interpretation, and it can be learned using multi-strategy (ampliative) uncertain reasoning. |
| [A] | Autocatalysis | AERA (Nivel et al. 2014) \\ & \\ Ikon Flux (Nivel 2007) | In this context the architecture is in large part composed of a large collection of models, acting as hierarchically organized controllers, executed through a contextually-informed, continuous auto-catalytic process. New models are produced automatically, based on experience, their quality evaluated in light of this experience, and improvements produced as a result. Self-programming occurs at two levels: the lower one is concerned with performance in a set of domains, making models of how best to achieve goals in the external world at any point in time; the higher level is concerned with the operation of the lower one, implementing integrated cognitive control and meta-learning capabilities. Semantically closed auto-catalytic processes maintain the system’s growth after they are deployed. |
| source | [[http://alumni.media.mit.edu/~kris/ftp/JAGI-Special-Self-Progr-Editorial-ThorissonEtAl-09.pdf|Thórisson et al. 2012]] |
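To make the [S] row concrete, here is a minimal Python sketch (not GPS or the Gödel Machine; the water-jug domain, ''operators'' and ''synthesize'' are chosen purely for illustration) in which the atomic actions are state-changing operators and the "program" the system produces is simply the operator path found from the initial state to a goal state.

<code python>
# Minimal sketch of the [S] view: a "program" is a path of state-changing
# operators from the initial state to a goal state, found here by breadth-first
# search over a tiny water-jug domain. Illustrative only; not GPS itself.
from collections import deque

CAP_A, CAP_B = 4, 3   # jug capacities in liters

def operators(state):
    """Enumerate (name, successor-state) pairs for the atomic actions."""
    a, b = state
    yield "fill_a", (CAP_A, b)
    yield "fill_b", (a, CAP_B)
    yield "empty_a", (0, b)
    yield "empty_b", (a, 0)
    pour = min(a, CAP_B - b)
    yield "pour_a_b", (a - pour, b + pour)
    pour = min(b, CAP_A - a)
    yield "pour_b_a", (a + pour, b - pour)

def synthesize(initial, goal_test):
    """Return a shortest operator sequence (the 'program') reaching a goal state."""
    frontier, seen = deque([(initial, [])]), {initial}
    while frontier:
        state, path = frontier.popleft()
        if goal_test(state):
            return path
        for name, nxt in operators(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, path + [name]))
    return None

# A "program" that leaves exactly 2 liters in jug A (six fill/pour/empty steps).
print(synthesize((0, 0), lambda s: s[0] == 2))
</code>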
\\ \\

====Design Assumptions in The Above Approaches====

| How does the system represent each basic action? | a) As an operator that transforms one state into another, either deterministically or probabilistically, with a goal being a state to be reached [R, S] \\ b) As a function that maps some input arguments to some output arguments [G] \\ c) As a realizable statement with preconditions and consequences [A, E, I, P] \\ Relevant assumptions: \\ Is the knowledge about an action complete and certain? \\ Is the action set discrete and finite? |
| Can a program be used as an action in other programs? | a) Yes, programs can be built recursively [A, E, G, I] \\ b) No, a program can only contain basic actions [R, S, P] \\ Relevant assumptions: \\ Do the programs and actions form a hierarchy? \\ Can these recursions have closed loops? |
| How does the system represent goals? | a) As states to be reached [S] \\ b) As values to be optimized [G, R] \\ c) As statements to be realized [E, P, A] \\ d) As functions to be approximated [I] \\ Relevant assumptions: \\ Is the knowledge about goals complete? \\ Is the knowledge about goals certain? \\ Can all the goals be reached with a concrete action set? |
| Are there derived goals? | a) Yes, and they are logically dependent on the original goals [I, S, P] \\ b) Yes, and they may become logically independent of the original goals [A, E] \\ c) No, all goals are given or innate [G, R] \\ Relevant assumptions: \\ Are the goals constant or variable? \\ Are the goals externally imposed or internally generated? |
| Can the system learn new knowledge about actions and goals? | a) Yes, and the learning process normally converges [G, I, R] \\ b) Yes, and the learning process may not converge [A, E, P] \\ c) No, all knowledge is given or innate [S] \\ Relevant assumptions: \\ Are the goals constant or variable? \\ Are the actions constant or variable? |
| What is the extent of resources demanded? | a) Unlimited time and/or space [I, R, S, P] \\ b) Limited time and space [A, E, G] \\ Relevant assumption: \\ Are the resources used an attribute of the problem, or of the solution? |
| When is the quality of a program evaluated? | a) After execution, according to its actual contribution [G] \\ b) Before execution, according to its definition or historical record [I, S, P] \\ c) Both of the above [A, E, R] \\ Relevant assumption: \\ Are adaptation and prediction necessary? |
| source | [[http://alumni.media.mit.edu/~kris/ftp/JAGI-Special-Self-Progr-Editorial-ThorissonEtAl-09.pdf|Thórisson et al. 2012]] |

\\ \\ \\

//EOF//