public:t720-atai-2012:requirements
The capabilities of a human mind can be broadly classified into two kinds along the dimensions of (a) **knowing what** and (b) **knowing how**. The former is generally thought of as "facts", but could also be said to be truth statements about the world; the second has traditionally been connected more with robotics, and is about control. It is when both of these are combined in one system, and properly coordinated, that we get a very powerful system for doing all sorts of real-world tasks. It is in these kinds of systems that we start to see the kind of generality associated with human minds. Of course, as we mentioned already, every intelligent system must have some minimum of "knowing how", since it must be able to act in the world; for the purposes of painting broad strokes in the present discussion we can ignore such obvious issues inherent in the classification. But some might argue that the classification is bogus, because both are obviously needed. I tend to agree to some extent. A control system that cannot do any kind of reasoning is going to be very limited, as it will probably lose out on the "G" in "AGI". But conversely, a system that can only do reasoning (as we know it from e.g. academic and scientific work) can never be expected to learn how to control a time-bounded activity in a complex world, as it may never have the control capabilities called for. It is only if we stretch these concepts beyond their "decent" limits, as used in everyday language, that we can agree to either extreme being sufficient for achieving AGI, e.g. saying that inventing a control system for controlling e.g. a hexapod body in the swamps can be done via reasoning alone, as long as it is combined with some sort of advanced self-programming capabilities. This stretches at least my own understanding of what "reasoning" is generally used to mean. Conversely, we might try to argue that implementing advanced control systems capable of some sort of self-description and reasoning could get us away from having to impart reasoning to the system from the outset -- in which case we would only have replaced definitions with tautology.
  
According to the preceding analysis it is not sufficient to refer only to reasoning when trying to define what is intelligent and what is not, as reasoning alone will not account for the many necessary control functions that can be found in a human mind -- attention being prime among them. Conversely, an advanced control system devoid of reasoning capabilities -- the ability to abstract, analyze, and adjust itself -- will likely never reach the advanced architectural sophistication required for AGIs. It may seem that by talking about growth we are diverting attention to something unrelated. But no. This discussion actually becomes much simpler by introducing the requirement of **growth capability** into our AGI-system-to-be: Assuming that any and all AGI systems, to be able to meet the high demands of multiple -- a-priori unknown -- environments, must be capable of advanced levels of **self-reorganization** removes the conceptual shortcomings associated with trying to understand (and define) intelligence based only on a particular limited viewpoint. This argument is not very difficult to uphold, as anyone can see that a system that has trained itself to be good at some complex task under some particular conditions must be significantly handicapped if moved to another environment. Think underwater versus desert; jungle versus outer space. While some of the task's root goals may be the same, the majority of sub-goals may in fact be vastly different. The greater the difference between two or more environments and tasks to be learned, the greater the difference between the state of the system before and after it has mastered both/all.
  
Architectural self-reorganization, a.k.a. **self-programming**, is in fact a hallmark of intelligence, and it is quite straightforward to map this concept onto a diverse set of systems, such as thermostats (no self-programming) and humans (some self-programming). A system that can get better at some task <m>X</m> is called a **learning system**, or a system capable of learning. A system that can get better at getting better -- in other words //learn to learn// -- is a system capable of meta-learning. This is a system whose architecture is capable of **growth**. Humans are an example implementation of such a system. While meta-learning is not strictly necessary for cognitive growth -- other enablers being, for example, the ability to learn things relatively different from what has been learned before through effective application of analogies, and the ability to continuously grow one's own knowledge -- meta-learning is perhaps the most powerful of the functions enabling cognitive growth.
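The distinction between learning and meta-learning can be made concrete with a toy sketch. The classes and numbers below are purely illustrative (not from any real AGI system): a `Learner` reduces its task error, while a `MetaLearner` additionally adjusts //how// it learns -- its own learning rate -- based on whether it is still improving.

```python
# Toy sketch (illustrative only): learning vs. meta-learning.
# A Learner gets better at a task; a MetaLearner also gets better
# at getting better, by tuning its own learning rate.

class Learner:
    def __init__(self, lr=0.1):
        self.lr = lr
        self.estimate = 0.0

    def step(self, target):
        error = target - self.estimate
        self.estimate += self.lr * error       # learning: reduce task error
        return abs(error)

class MetaLearner(Learner):
    def __init__(self, lr=0.1):
        super().__init__(lr)
        self.prev_error = None

    def step(self, target):
        error = super().step(target)
        # meta-learning: change *how* we learn based on progress
        if self.prev_error is not None:
            if error < self.prev_error:
                self.lr = min(1.0, self.lr * 1.1)   # speed up while improving
            else:
                self.lr *= 0.5                       # back off when not
        self.prev_error = error
        return error

m = MetaLearner()
errors = [m.step(target=10.0) for _ in range(20)]
assert errors[-1] < errors[0]   # both the task skill and the learning improved
```

The thermostat of the text sits below even the `Learner` (fixed behavior, no self-modification); a human sits above the `MetaLearner`, modifying not just one parameter of its own learning but its whole architecture.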
We are no longer discussing what features allow us //comfortably to define something// to be "human-level" -- we are now in the realm of what is //functionally// necessary for an A.I. system to become human-level or AGI: What are the key functions that a human mind implements that make it so different from -- and in some ways more valuable than -- other animal intelligence and narrow A.I. systems? The human mind is in some way "general" over and above what animal and narrow A.I. systems are -- and it is this we want to achieve in our AGI.
  
There are some functions that are frequently brought up as candidates for being a hallmark of human-level intelligence, and which it would seem prudent to address. Listing these in no particular order: creativity, inventiveness, insight, intuition, imagination, reasoning with uncertainty, experimentation, calculated risks, curiosity. Some of these may ultimately be highly desired and //necessary// features of AGIs; others may become optional or adjustable to varying degrees, depending on what we want to use our AGI for. The way these terms are used in everyday language makes it reasonable to assume that imparting them to an artificial system would be of great value. There are probably more that are worthy of listing here, but let's stick with these.
  
"Creativity" is a concept that has been thrown around for centuries, and there are at least 20 different definitions available in the literature for the term. In general, being creative is (at least) the ability to come up with non-obvious and novel (to varying degrees) ideas, solutions, suggestions, etc. To be termed "creative" a solution cannot be random -- creativity will not be ascribed to a randomization process -- and it cannot be obvious either. There are at least two ways to assess //obviousness//. The first is in light of what other minds from a group of minds have been or are able to come up with -- a population-based measure. The other is in light of what can be deduced, or induced with not too much effort, by a single cognitive system, based on its available information and knowledge. One way to quantify an individual mind's progress on the creativity spectrum is to ask, other things being equal (the difficulty of the problems being solved and the rate of idea generation), whether the solutions and ideas the mind is producing are improving in quality. Clearly, if we want to build an artificial //general// intelligence it would behoove us to require it to have at least some minimum ability to come up with non-obvious solutions to problems we present it with: an AGI should be creative. Some might venture to argue that AGI cannot be achieved without creativity. That does not mean, however, that creativity must be "manually imparted" or "force-fed" to the AGI -- it could just as well be that **creativity** is a natural corollary of intelligence, possibly resulting from intelligence and creativity relying on precisely the same underlying mechanisms, in which case it would be a natural impossibility to build a non-creative AGI.
"Reasoning with uncertainty" has a bit of a different flavor than the other terms. In general, "reasoning" refers to an ability to use logic -- in some way -- to come up with conclusions based on particular premises. The various types of reasoning, which we will discuss in more detail in a later section, show that there are many more ways to use reasoning than for simple deduction (Socrates is a man; all men are mortal; hence, Socrates is mortal). Deduced knowledge is "inevitable knowledge", because the conclusions derive directly from the premises. So in some sense deduction is the least interesting use of reasoning. But long deduction chains can have some interesting and unexpected results, and it can be argued that people ordinarily do not do enough deduction in their daily life, as for most people at least one paradox between their views and their behavior can be found every day (take, for example, the person who wants to be 'generous' yet supports no third-world fund). Deduction is essentially the only reasoning that one does with //full certainty//. All other kinds of reasoning involve some uncertainty, to varying extents. Of course, to be useful, an AGI would need to be logical, to the fullest extent possible. Unfortunately it is difficult to say what that extent is or will be. Two kinds of reasoning that we most certainly would want our AGI to be capable of are **abduction** and **induction**. The former refers to the ability to infer causes from observations, e.g. "because the grass is wet, it may have rained yesterday". Induction, the ability to generalize from observations, is essentially the primary basis for scientific inquiry. No concrete proposals exist for how to imbue such a skill into an artificial entity, although plenty of ideas have been fielded.
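The three reasoning modes just discussed can be contrasted in a toy sketch. The rule base and function names below are made up for illustration; note how only deduction carries full certainty, while abduction and induction are inherently defeasible.

```python
# Toy sketch (illustrative only): three modes of reasoning over
# simple "if cause then effect" rules.

RULES = {"rain": "wet_grass", "man": "mortal"}

def deduce(premise):
    """Deduction: given the premise and a rule, the conclusion is certain."""
    return RULES.get(premise)

def abduce(observation):
    """Abduction: infer *possible* causes from an observation -- uncertain,
    since unlisted causes (e.g. a sprinkler) could explain it just as well."""
    return [cause for cause, effect in RULES.items() if effect == observation]

def induce(episodes):
    """Induction: generalize a rule from repeated observations -- also
    uncertain, as the very next observation may break the pattern."""
    seen = {}
    for cause, effect in episodes:
        seen.setdefault(cause, set()).add(effect)
    # propose a rule only where the same effect followed every single time
    return {c: next(iter(e)) for c, e in seen.items() if len(e) == 1}

assert deduce("man") == "mortal"                          # certain
assert abduce("wet_grass") == ["rain"]                    # one candidate cause
assert induce([("rain", "wet_grass")] * 3) == {"rain": "wet_grass"}
```

The sketch also shows why abduction and induction resist simple mechanization: both return answers whose truth is not guaranteed by the premises, which is exactly the "uncertainty" the text refers to.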
  
A primary way to test generalizations derived via induction is experimentation -- another key method of modern science. Any generalization will imply non-obvious predictions whose truth value is unknown; by testing these predictions we can confirm or disprove the generalization. Continued failures to disprove a generalization, assuming the generalization is logical, provide support for its usefulness (and in some sense correctness); a single disproving result will of course invalidate the generalization. It may, however, continue to be useful, as can be seen in the continued use of Newtonian physics in spite of Einstein providing a more correct theory of physics which subsumes it. Taking calculated risks lies, in a way, on top of all the prior concepts we have covered -- by applying knowledge, reasoning, creativity, inventiveness, and experimentation, one could enable an artificial system to take "calculated" (really //informed//) risks. Such behavior is certainly observed in humans, and may become useful for realizing the full potential of AGIs. However, taking calculated risks is a murkier concept than most of the others, and it may be difficult to operationalize; certainly it will be difficult to make it a particular goal of building an AGI -- to build an AGI capable of taking calculated risks. For now we will assume that taking calculated risks -- in the most general and obvious interpretation of that concept -- is likely to be an emergent property of most or all future AGIs, as a function of the fact that they most likely //will// be asked to do novel things that nobody has done before, and therefore inherently require behavior of the kind that could be given that label.
  
Curiosity is likely to exist in animals because all intelligent systems must continuously deal with uncertain information -- there is no guarantee, for instance, that the stairs to my attic won't break as I ascend, yet I will not spend all morning checking every single joint, every single fiber in the wood, and every single nail, to make absolutely sure, even though this means I //might// fall and break my leg in the process. Curiosity helps fill in the gaps in our missing knowledge -- why is that reflection so peculiar? we may think, as we walk through a revolving chrome-and-glass door, an act that may very well result in new knowledge that helps us later avoid walking into it and hurting ourselves. 

What about emotion? Emotions are certainly a real phenomenon. There are at least two sides to the emotion coin that we must address. First, emotions have an //experiential// component that most people think of when using the word. The experience of //feeling// sad, of //feeling// guilt, pain, despair, anger, frustration -- these are typically experienced by every person, to some extent, at least once per year, and in many cases much more often. Second, there is the //effect// part -- the shoe flying towards me as my classmate, his face twisted with anger, takes it out on me. I see his facial expression, I feel the shoe hit my head -- but I don't experience //his// emotion directly. This may of course create other emotions in myself, but these are mine, not his. It can easily be argued -- but we won't spend much space on it here -- that it is only the latter, //effectual//, part of emotions that is relevant to AGI. As Chalmers has convincingly argued in his thought experiments, it is not difficult to imagine a zombie that feels nothing, yet whose behavior is indistinguishable from that of any human. This is because the only knowledge anyone has of experience is their own experience. When someone tells me they feel pain I have to believe them -- I only have their word and their behavior to judge from; I cannot possibly feel //their// pain, only my own. Therefore, if everyone around me were a really amazing actor, for all I know the only person on the planet who actually //experiences// pain is me. The role of this experience in actually controlling behavior has been debated for decades; what most agree on, however, is that the //effect// of emotions on behavior can be cast in a control paradigm: emotions have a role in affecting the way we act, think, and even perceive the world. For the purposes of AGI -- since the focus of the present quest is not experience per se but intelligence -- we can ignore the experiential part of emotions and focus on the control part. 

What is the control exerted by emotion in natural cognition? One primary effect that has been discerned is what has been called "focusing of attention" -- the steering of our intake of information (and thus what we spend our time thinking about). This effect is often encountered in conditions of stress, frustration and anger. Another is attending to our bodily health -- the most obvious example being when we are in physical pain. Emotions also seem to control the consolidation of memories -- high emotional states tend to induce stronger memorization than states of relaxation. All of these are undoubtedly useful heuristics for evolving and growing up in nature; whether we will end up wanting our AGI to have all of these, or some of these, remains to be seen. For now we will assume that any reasonably powerful cognitive architecture's control system should be able to realize such functions. Of course, to implement emotion-like control, the architecture must also be capable of reasonably sophisticated contextual recognition, since the rational evocation of emotional control functions always relies on the juxtaposition of an agent with its environment.
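The control reading of emotion sketched above -- attention focusing and memory consolidation -- can be illustrated in toy form. Everything below (the "arousal" scalar, the salience numbers, the function names) is a made-up illustration of the control paradigm, not a model of any real cognitive architecture.

```python
# Toy sketch (illustrative only): emotion cast as a control signal.
# A scalar "arousal" level biases attention (which inputs get through)
# and memory consolidation (how strongly events are stored).

def attend(inputs, arousal):
    # higher arousal narrows attention to the most salient inputs,
    # mirroring the "focusing of attention" under stress or pain
    return [(salience, item) for salience, item in inputs if salience >= arousal]

def consolidate(memory, event, arousal):
    # high-emotion states induce stronger memorization than relaxation
    memory[event] = memory.get(event, 0.0) + arousal
    return memory

inputs = [(0.9, "sharp pain in foot"),
          (0.2, "birdsong"),
          (0.6, "approaching car")]

calm = attend(inputs, arousal=0.1)      # relaxed: everything gets through
alarmed = attend(inputs, arousal=0.5)   # stressed: attention narrows
assert len(alarmed) < len(calm)

mem = consolidate({}, "stepped on a nail", arousal=0.9)
mem = consolidate(mem, "heard birdsong", arousal=0.2)
assert mem["stepped on a nail"] > mem["heard birdsong"]
```

The point of the sketch is the text's claim that such functions need no experiential component: a plain threshold and a weighted store reproduce the //effects// without anything being felt.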
  
Is natural language necessary for AGIs? There can be no doubt that natural language has a high level of utility for any artificial system -- being able to ask an intelligent system in plain language how it plans to achieve a particular task, to summarize its operations for the past month in a few sentences, or to report on its current status, and get a reply back in concise and chiseled language, could come in handy on many occasions. Whether it is //necessary// for an AGI to be language-capable is a different question; some go so far as to argue that (natural) language is necessary for any system capable of symbol manipulation and higher-level thinking. This is, in my opinion, an empirical question for which at present there is not sufficient evidence to argue convincingly either way. Certainly the kinds of tasks that some dogs, apes, crows, and horses are capable of doing have a high cognitively-functional overlap with humans and would seem worthy of being called symbol manipulation under even the most stringent definition of that term. We could probably spend considerable space speculating on the importance of language to social interaction and "thinking in groups", because to a large extent a human individual's cognitive feats, such as inventing a new mathematics or deciphering self-sustaining processes in living systems, are dependent on their socio-historic environment. There can be little doubt that had Turing lived in the bronze age he would not have had the tools or societal context to come up with his ideas about computers. The primary way of allowing such effects in society is via natural language -- so presumably, if we wanted to replicate similar effects with AGIs, we //might// have to give them the ability to communicate at levels that are at least as efficient as natural language, and there might be even better ways that we could invent. Suffice it to say at this point that it is possible that certain kinds of thinking are difficult, impractical, or even perhaps impossible to do without (natural) language -- and that there is almost certainly a large set of functions and capabilities that a system might possess which do //not// -- strictly speaking -- require language, yet are still both necessary and sufficient for an A.I. to deserve being called //AGI//.