[[http://cadia.ru.is/wiki/public:t-720-atai:atai-16:main|T-720-ATAI-2016 Main]]

=====T-720-ATAI-2016=====
====Lecture Notes, F-12 26.02.2016====

\\ \\ \\ \\

====Belousov-Zhabotinsky Reaction====

| {{public:t-720-atai:250px-the_belousov-zhabotinsky_reaction.gif}} |
| Simulated Belousov-Zhabotinsky reaction. [[https://en.wikipedia.org/wiki/Belousov–Zhabotinsky_reaction|Source: Wikipedia]] |

\\ \\

====Belousov-Zhabotinsky Reaction====

| What it is | A chemical reaction discovered by Boris Belousov around 1950. |
| Why it's important | A striking visual example of the kind of emergent patterns that can be created through auto-catalysis (chemical, in this case). One of the first (the first?) scientifically published examples of emergence identified as such. |
| Real version on YouTube | https://www.youtube.com/watch?v=IBa4kgXI4Cg \\ https://www.youtube.com/watch?v=3JAqrRnKFHo \\ https://www.youtube.com/watch?v=4y3uL5PRsZw&feature=related |

\\ \\

====How the Belousov-Zhabotinsky Reaction Works====

| {{public:t-720-atai:zhabotinsky-reaction-1.png?400|Belousov-Zhabotinsky Reaction}} |
| A Belousov-Zhabotinsky reaction, or BZ reaction, is one of a class of reactions that serve as a classical example of non-equilibrium thermodynamics, resulting in the establishment of a nonlinear chemical oscillator. [[https://en.wikipedia.org/wiki/Belousov–Zhabotinsky_reaction|Wikipedia]] A numerical sketch of such an oscillator is given below, after the CA sections. |

\\ \\ \\ \\

====Cellular Automata====

| What it is | An algorithmic way to program interaction between (large numbers of) rule-determined "agents" or cells. [[https://en.wikipedia.org/wiki/Cellular_automaton|Wikipedia]] |
| Why it's important | A powerful method for exploring the concept of emergence. Also used for simulating the evolution of complex systems. |
| Explicates | Interaction of rules. |
| Typical manifestation | A 1D or 2D grid with cell behavior governed by rules of interaction. Each cell has a scope of what it "sees" (its range of "causal ties"). |

\\ \\

====CA Example 1====

| {{public:t-720-atai:emergence-fig.jpg}} |
| In this example (one possible reading of these rules in code is given after the Wolfram section below): |
| **Green --> Brown IF one or more are //true//:** \\ * There are more than 20 green patches around and lifetime exceeds 30 \\ * There are fewer than 12 green patches around and lifetime exceeds 20 \\ * The number of surrounding green patches > 25 \\ * Lifetime > 60 ticks \\ **Brown --> Green IF both are //true//:** \\ * Number of surrounding green patches > 8 and their combined lifetime > 80 \\ * Number of surrounding brown patches > 10 |

\\ \\

====Stephen Wolfram's CA Work====

| CA | http://mathworld.wolfram.com/CellularAutomaton.html |
| Book | A New Kind of Science. |
| Why it's important | A major analysis of rules for 1D CAs; the most comprehensive work on CAs to date. |
| Rule 30 | [[https://en.wikipedia.org/wiki/Rule_30|Wikipedia]] |
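\\ \\

====Code Sketch: Rule 30====

Wolfram's elementary CAs are compact enough to reproduce in a few lines. The Python sketch below is an illustration added to these notes, not material from the lecture: it runs Rule 30 from a single "on" cell, with wrap-around edges chosen for simplicity, and prints the familiar chaotic triangle.

<code python>
# Minimal sketch of Wolfram's Rule 30 elementary cellular automaton.
# Each cell looks at (left neighbor, itself, right neighbor); the 3-bit
# neighborhood indexes into the rule's 8-bit lookup table.

RULE = 30  # try 90 or 110 to explore other 1D rules

def step(cells):
    """Compute one generation of a 1D CA under RULE (wrapping edges)."""
    n = len(cells)
    out = []
    for i in range(n):
        left, me, right = cells[(i - 1) % n], cells[i], cells[(i + 1) % n]
        idx = (left << 2) | (me << 1) | right  # neighborhood as 0..7
        out.append((RULE >> idx) & 1)          # look up the rule bit
    return out

width, generations = 63, 30
cells = [0] * width
cells[width // 2] = 1  # single live cell in the middle
for _ in range(generations):
    print("".join("#" if c else " " for c in cells))
    cells = step(cells)
</code>

Changing ''RULE'' to any value in 0-255 gives the corresponding elementary CA, which is what makes the 1D rule space small enough for the kind of exhaustive analysis Wolfram carried out.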
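\\ \\

====Code Sketch: The CA Example 1 Rules====

Below is one possible reading of the CA Example 1 rules above in Python. The grid size and neighborhood radius are not specified in the notes, and "lifetime" is read here as the number of ticks since a patch last changed color; all three are assumptions made for the sake of a runnable sketch.

<code python>
import random

SIZE, RADIUS = 50, 3  # assumed grid size and neighborhood radius
GREEN, BROWN = 1, 0

grid = [[random.choice((GREEN, BROWN)) for _ in range(SIZE)] for _ in range(SIZE)]
life = [[0] * SIZE for _ in range(SIZE)]  # ticks since the patch last flipped

def neighbors(x, y):
    """Yield (state, lifetime) for every patch within RADIUS (wrapping)."""
    for dx in range(-RADIUS, RADIUS + 1):
        for dy in range(-RADIUS, RADIUS + 1):
            if dx or dy:
                nx, ny = (x + dx) % SIZE, (y + dy) % SIZE
                yield grid[ny][nx], life[ny][nx]

def next_state(x, y):
    nbrs = list(neighbors(x, y))
    greens = [l for s, l in nbrs if s == GREEN]  # lifetimes of green patches
    browns = [l for s, l in nbrs if s == BROWN]
    age = life[y][x]
    if grid[y][x] == GREEN:
        # Green -> Brown if ANY of the four conditions holds.
        if ((len(greens) > 20 and age > 30) or
                (len(greens) < 12 and age > 20) or
                len(greens) > 25 or
                age > 60):
            return BROWN
    else:
        # Brown -> Green only if BOTH conditions hold.
        if len(greens) > 8 and sum(greens) > 80 and len(browns) > 10:
            return GREEN
    return grid[y][x]

def tick():
    """Update all patches synchronously, as in a standard CA."""
    global grid, life
    new = [[next_state(x, y) for x in range(SIZE)] for y in range(SIZE)]
    life = [[0 if new[y][x] != grid[y][x] else life[y][x] + 1
             for x in range(SIZE)] for y in range(SIZE)]
    grid = new

for _ in range(100):
    tick()
print(sum(row.count(GREEN) for row in grid), "green patches after 100 ticks")
</code>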
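\\ \\

====Code Sketch: A BZ-style Chemical Oscillator====

Returning to the BZ reaction from the start of these notes: its oscillation can be sketched numerically with the two-variable Oregonator, a standard reduced model of the BZ chemistry. The model choice and parameter values below are illustrative additions (textbook-style values assumed to lie in the oscillatory regime), not material from the lecture.

<code python>
# Two-variable Oregonator: x ~ HBrO2 concentration (fast variable),
# z ~ oxidized-catalyst concentration (slow variable). Integrated with
# plain forward Euler; a small step is used because the system is stiff.

eps, q, f = 0.04, 0.0008, 1.0  # assumed parameters in the oscillatory regime
dt = 1e-4
steps = 300_000                # simulate up to t = 30

x, z = 0.1, 0.1
for n in range(steps):
    dx = (x * (1 - x) - f * z * (x - q) / (x + q)) / eps
    dz = x - z
    x, z = x + dt * dx, z + dt * dz
    if n % 20_000 == 0:
        # x repeatedly spikes and relaxes: the chemical oscillation.
        print(f"t={n * dt:5.1f}  x={x:8.5f}  z={z:8.5f}")
</code>

Coupling many such oscillators on a grid via diffusion yields the travelling waves and spirals seen in the simulated reaction GIF at the top of these notes.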
\\ \\

====Symbols, Meaning & Understanding====

| What are Symbols? | Peirce's Theory of Semiotics (signs) proposes three parts to a sign: a //sign/symbol//, an //object//, and an //interpretant//. Example of a symbol: an arbitrary pattern, e.g. a written word. Example of an object: an automobile. Example of an interpretant: what you see in your mind's eye when you read the word "automobile". The last part is the most complex, because what you see and what I see when we read the word "automobile" are probably not exactly the same. |
| "Symbol" | Peirce used various terms for this, including "sign", "representamen", "representation", and "ground". Others have suggested "sign-vehicle". What is meant in all cases is a pattern that can be used to stand for something else, and that therefore requires an interpretation to be used as such. |
| Peirce's Innovation | Detaching the symbol/sign from the object signified, and introducing the interpretation process as a key entity. This makes it possible to explain why people misunderstand each other, and how symbols and meaning can grow and change in a culture. |
| Understanding | Understanding of a particular phenomenon //phi// is the potential to perform actions and answer questions with respect to //phi//. Example: Is an automobile heavier or lighter than a human? For this, computational models can be used. |
| Meaning | This is far from settled; philosophers are still grappling with the topic. It is of course highly relevant to AI, especially AGI. |
| Current Approach | Meaning stems from two main sources. First, acquired and tested models form a graph of relations; the comprehensiveness of this graph determines the level of understanding that the models can support with respect to a particular phenomenon. Meaning is not possible without (some level of) understanding. Second, meaning comes from the context in which symbols are used, where the context is provided by (a) who/what uses the symbols, (b) in what particular task-environment, and (c) under what particular //syntactic constraints//. |
| Prerequisites for using symbols | A prerequisite for communication is a shared interpretation method, a shared interpretation of syntax (context), and shared knowledge (of objects). |
| Where the Symbols "Are" | When we use the term "symbol" in daily conversation we are typically referring to its //meaning//, not its form (the sign). The meaning of a symbol emerges from the interpretation process triggered by the contextual use of a sign: the sign's relation to forward models, in its pragmatic and syntactic context, produces a meaning - that which is //signified//. Thus, rather than being "stored in a database", symbols are continuously and dynamically "computed from knowledge" (a toy illustration follows below). |
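\\ \\

====Code Sketch: Computing a Symbol's Meaning====

The claim in the last row above - that symbols are "computed from knowledge" rather than stored - can be caricatured in a few lines of code. Everything below (the relation graph, the ''interpret'' function, the context format) is a hypothetical toy added for illustration, not a mechanism proposed in the lecture.

<code python>
# Toy Peircean setup: a sign plus a context is mapped by an
# interpretation process to an interpretant, computed on the fly
# from a graph of acquired models (here, a tiny relation dict).

knowledge = {
    ("automobile", "heavier-than"): ["human", "bicycle"],
    ("automobile", "is-a"): ["vehicle"],
}

def interpret(sign, context):
    """Produce an interpretant: the relations this sign activates in context."""
    return {rel: objs
            for (s, rel), objs in knowledge.items()
            if s == sign and rel in context["relevant-relations"]}

# The same sign yields different meanings in different contexts:
print(interpret("automobile", {"relevant-relations": {"heavier-than"}}))
print(interpret("automobile", {"relevant-relations": {"is-a"}}))
</code>

The richer the relation graph, the more actions and questions the interpretation can support - the notion of understanding used in the Understanding row above.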