\\
\\
==== Some Open Questions About Meaning, Understanding, Autonomy, & Responsibility ====

| Meaning | **True or False?**: To create meaning relevant to itself in a particular situation, a cognitive system (a special kind of computing system) must be able to predict the effects and side-effects of any potential action or event in the physical world that is relevant to this situation. It must be able to //understand// these effects and side-effects. (A toy sketch of this criterion follows the table.) |
| Understanding | To understand any event and/or situation in relation to itself and/or others, a cognitive system must be able to unify relevant aspects of such situations with its own past, present, or future. |
| Autonomy | To have "full autonomy" (or "near-full autonomy"), a cognitive system must be able to relate its meaning generation and understanding to its own situation and goals (whether those goals were given to it by its designers or it evolved to have them), as well as to the goals of others. \\ Do any machines yet exist that can be said to have "full autonomy"? |
| Responsibility | We consider a cognitive system to be 'worthy of responsibility' for a particular process if that system can be trusted to deflect most reasonable threats to that process that could arise. \\ What kinds of cognitive systems can be trusted with responsibility for human life? |
| Responsibility | If no machines yet exist that create meaning, have understanding, or harbor "full autonomy" (or "near-full autonomy"), can we -- as of yet -- trust any machines with their own behavior, or with any important responsibilities? |
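The Meaning criterion above is conceptual, but its core requirement -- scoring potential actions by their predicted effects and side-effects -- can be cartooned in code. The sketch below is a minimal, hypothetical illustration, not part of the course material: all names (''ToyCognitiveSystem'', ''toy_model'', the numeric predictions) are invented, and a hand-coded lookup table stands in for the genuine world-model that the criterion actually demands.

<code python>
from dataclasses import dataclass

@dataclass
class Prediction:
    effect: float        # predicted progress toward the system's own goal
    side_effects: float  # predicted cost imposed on the rest of the situation

class ToyCognitiveSystem:
    """Chooses actions only after predicting both effects AND side-effects."""

    def __init__(self, model):
        # 'model' maps an (action, situation) pair to a Prediction;
        # it stands in for the system's understanding of the world.
        self.model = model

    def meaning_of(self, action, situation):
        # The "meaning" of an action here is its predicted net relevance
        # to the situation: benefit minus harm caused elsewhere.
        p = self.model(action, situation)
        return p.effect - p.side_effects

    def choose(self, actions, situation):
        # Pick the action whose predicted effects and side-effects
        # make it most relevant to the current situation.
        return max(actions, key=lambda a: self.meaning_of(a, situation))

# Usage: a hand-coded stand-in model for a trivial driving "situation".
def toy_model(action, situation):
    table = {
        "brake":      Prediction(effect=0.9, side_effects=0.1),
        "accelerate": Prediction(effect=0.6, side_effects=0.8),
        "swerve":     Prediction(effect=0.7, side_effects=0.4),
    }
    return table[action]

agent = ToyCognitiveSystem(toy_model)
print(agent.choose(["brake", "accelerate", "swerve"], "obstacle ahead"))
# -> "brake": best predicted effect relative to predicted side-effects
</code>

The gap between this lookup table and real understanding is exactly what the questions above probe: the toy "predicts" only what its designer typed in, so by the table's own definitions it creates no meaning of its own.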
\\
\\
\\
2025(c)K.R.Thórisson