=====Worlds & Regularity=====
  
|  Noise  | A world with no regularity is a completely unpredictable world. \\ In such worlds, learning is impossible.   |
\\
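
Below is a minimal toy sketch (Python; the predictor and the worlds are invented here for illustration) of the //Noise// entry above: a simple successor-frequency learner predicts a regular (repeating) world almost perfectly, but stays at chance level in a world with no regularity.

<code python>
import random
from collections import Counter, defaultdict

def train(seq):
    """Count which symbol tends to follow each symbol (the world's regularity, if any)."""
    following = defaultdict(Counter)
    for a, b in zip(seq, seq[1:]):
        following[a][b] += 1
    return following

def predict(model, symbol, alphabet):
    """Predict the most frequent successor observed so far; guess if none was seen."""
    return model[symbol].most_common(1)[0][0] if model[symbol] else random.choice(alphabet)

def accuracy(seq, alphabet):
    half = len(seq) // 2
    model, test = train(seq[:half]), seq[half:]
    hits = sum(predict(model, a, alphabet) == b for a, b in zip(test, test[1:]))
    return hits / (len(test) - 1)

alphabet = ["A", "B", "C", "D"]
regular = alphabet * 500                                  # a world with a strict regularity
noise = [random.choice(alphabet) for _ in range(2000)]    # a world with no regularity at all

print("regular world:", accuracy(regular, alphabet))      # ~1.00 -- learnable
print("noise world:  ", accuracy(noise, alphabet))        # ~0.25 -- chance level; learning impossible
</code>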
  
=====Causation=====
  
|  Deduction = Prediction  | Since logical correlation is sufficient to produce a prediction.    |
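
A small illustrative sketch of the //Deduction = Prediction// entry (the rules here are made up): a rule that only encodes an observed regularity, combined with an observed fact, already yields a prediction by plain deduction.

<code python>
# Rules encode observed regularities: "whenever the antecedent holds, the consequent follows".
rules = [
    ("lightning", "thunder"),
    ("wet_grass", "slippery_grass"),
]

def deduce(observed, rules):
    """One forward-chaining step: every rule whose antecedent is observed produces a prediction."""
    return {consequent for antecedent, consequent in rules if antecedent in observed}

print(deduce({"lightning"}, rules))   # {'thunder'} -- a prediction obtained purely by deduction
</code>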
  
  
===== Self-Programming =====
|  \\ What it is  | //Self-programming// here means, with respect to some virtual machine **M**, the production of one or more programs created by **M** itself, whose //principles// for creation were provided to **M** at design time, but whose details were //decided by// **M** at runtime, based on its //experience//.  |
|  Self-Generated Program  | \\ Determined by some factors in the interaction between the system and its environment.   |
|  Historical note  | The concept of self-programming is old (J. von Neumann was one of the first to talk about self-replication in machines). However, few if any proposals for how to achieve it have been fielded.  [[https://en.wikipedia.org/wiki/Von_Neumann_universal_constructor|Von Neumann's universal constructor on Wikipedia]]   |
|  No guarantee  | The fact that a system has the ability to program itself is no guarantee that it is in a better position than a traditional system. In fact, it is in a worse position, because there are now more ways in which its performance can go wrong.    |
|  Why needed  | The inherent limitations of hand-coding methods make traditional manual programming approaches unlikely to reach the level of a human-grade generally intelligent system: to adapt to a wide range of tasks, situations, and domains, a system must be able to modify itself in more fundamental ways than a traditional software system is capable of.   |
|  Remedy  | Sufficiently powerful principles are needed to insure against the system going rogue.    |
|  \\ The //Self// of a machine  | **C1:** The processes that act on the world and the self (via senctors) evaluate the structure and execution of code in the system and, respectively, synthesize new code. \\  **C2:** The models that describe the processes in C1, entities and phenomena in the world -- including the self in the world -- and processes in the self. Goals contextualize models and they also belong to C2. \\ **C3:** The states of the self and of the world -- past, present and anticipated -- including the inputs/outputs of the machine.  |
\\
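
A toy sketch of the //What it is// entry above (all names and the synthesis principle are invented for illustration; this is not a description of any particular architecture): the //principle// for creating programs is fixed at design time, while the concrete program is synthesized by the machine at runtime from its own experience.

<code python>
# Primitive actions available to the machine M.
def inc(x):  return x + 1
def dec(x):  return x - 1
def noop(x): return x

# Design-time principle: "chain together the actions that, in past experience,
# moved the observed value closer to the goal value".
def synthesize(experience, goal):
    helpful = [act for act, before, after in experience
               if abs(goal - after) < abs(goal - before)]
    def program(state):                 # the self-generated program
        for act in helpful:
            state = act(state)
        return state
    return program

# Runtime experience: (action, value before, value after), gathered by interacting
# with the environment and unknown at design time.
experience = [(inc, 3, 4), (dec, 4, 3), (inc, 7, 8), (noop, 8, 8)]

program = synthesize(experience, goal=10)   # details decided by M at runtime
print(program(0))                           # applying the self-generated program to a new state
</code>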
  
===== Programming for Self-Programming =====
  
|  \\ Why Self-Programming?  | Building a machine that can write (sensible, meaningful!) programs means that the machine is smart enough to **understand** (to a pragmatically meaningful level) the code it produces. If the purpose of its programming is to //become// smart, and the programming language we give to it //assumes it's smart already//, we have defeated the purpose of creating the self-programming machine that gets smarter over time, because its operation requires that it is already smart.    |
\\
  
=====Levels of Self-Programming=====
|  Level 1  | Level-one self-programming capability is the ability of a system to make programs that exclusively make use of the primitive actions in its action set.  |
|  Level 2  | Subsumes Level 1; additionally generates new primitives.   |
\\
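
A rough sketch of the Level 1 / Level 2 distinction (toy Python, invented for illustration): a Level-1 system only chains together primitives from its fixed action set, while a Level-2 system can in addition promote a self-made program to a //new// primitive that later programs can use.

<code python>
# The primitive action set the system is given at design time.
actions = {
    "inc":    lambda x: x + 1,
    "double": lambda x: x * 2,
}

def compose(names, actions):
    """Level 1: build a program purely by chaining actions that already exist."""
    def program(x):
        for name in names:
            x = actions[name](x)
        return x
    return program

level1 = compose(["inc", "inc", "double"], actions)
print(level1(3))                       # ((3 + 1) + 1) * 2 = 10

# Level 2 subsumes Level 1 and additionally generates new primitives:
# here the self-made program itself becomes a new entry in the action set.
actions["add2_then_double"] = level1
level2 = compose(["add2_then_double", "inc"], actions)
print(level2(3))                       # 10 + 1 = 11
</code>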
  
=====Existing Systems Which Target Self-Programming=====
^  Label  ^  What  ^  Example  ^  Description  ^
|  \\ [S]  |  \\ State-space search  |  \\ GPS (Newell et al. 1963)  | The atomic actions are state-changing operators, and a program is represented as a path from the initial state to a final state. Variants of this approach include program search (examples: Gödel Machine (Schmidhuber 2006)): Given the action set A, in principle all programs formed by it can be exhaustively listed and evaluated to find an optimal one according to certain criteria.   |
\\
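
A compact sketch of the [S] entry (toy integer states and made-up operators; GPS itself used means-ends analysis, so this only illustrates the path-as-program idea): atomic actions are state-changing operators, and the "program" produced by the search is the operator path from the initial state to the goal state.

<code python>
from collections import deque

# Atomic actions: state-changing operators over a toy integer state space.
operators = {
    "inc":    lambda s: s + 1,
    "double": lambda s: s * 2,
}

def search(initial, goal, bound=10_000):
    """Breadth-first state-space search; the returned program is a path of operator names."""
    frontier, visited = deque([(initial, [])]), {initial}
    while frontier:
        state, path = frontier.popleft()
        if state == goal:
            return path
        for name, op in operators.items():
            nxt = op(state)
            if nxt not in visited and nxt <= bound:
                visited.add(nxt)
                frontier.append((nxt, path + [name]))
    return None

print(search(initial=2, goal=11))   # e.g. ['double', 'inc', 'double', 'inc'] -- a program as a path
</code>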
  
=====Design Assumptions in the Above Approaches=====
|  \\ How does the system represent a basic action?  | a) As an operator that transforms one state into another, either deterministically or probabilistically, with a goal given as a state to be reached [R, S] \\ b) As a function that maps some input arguments to some output arguments [G] \\ c) As a realizable statement with preconditions and consequences [A, E, I, P] \\ Relevant assumptions: \\ Is the knowledge about an action complete and certain? \\ Is the action set discrete and finite?   |
|  \\ Can a program be used as an "action" in other programs?  | a) Yes, programs can be built recursively [A, E, G, I] \\ b) No, a program can only contain basic actions [R, S, P] \\ Relevant assumptions: \\ Do the programs and actions form a hierarchy? \\ Can these recursions have closed loops?  |
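
A schematic sketch of the three representations of a basic action listed above (type and field names are invented here): (a) an operator from state to state, (b) a function from input arguments to output arguments, and (c) a realizable statement with preconditions and consequences.

<code python>
from dataclasses import dataclass, field
from typing import Any, Callable, Set

# a) An operator that transforms one state into another; a goal is a state to be reached [R, S].
@dataclass
class Operator:
    name: str
    apply: Callable[[Any], Any]            # state -> state

# b) A function that maps input arguments to output arguments [G].
@dataclass
class FunctionAction:
    name: str
    fn: Callable[..., Any]                 # inputs -> outputs

# c) A realizable statement with preconditions and consequences [A, E, I, P].
@dataclass
class Statement:
    name: str
    preconditions: Set[str] = field(default_factory=set)
    consequences: Set[str] = field(default_factory=set)

    def applicable(self, facts: Set[str]) -> bool:
        return self.preconditions <= facts

lift = Statement("lift_block", preconditions={"hand_empty", "block_clear"},
                 consequences={"holding_block"})
print(lift.applicable({"hand_empty", "block_clear", "on_table"}))   # True
</code>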
  
  
===== Predictability =====
  
|  What It Is  | The ability of an outsider to predict the behavior of a controller based on some information.   |
\\
  
=====Reliability=====
  
|  What It Is  | The ability of a machine to always return the same - or similar - answer to the same input.   |
\\
  
=====Trustworthiness=====
  
|  What It Is  | The ability of a machine's owner to trust that the machine will do what it is supposed to do.   |