The Joint Workshop on Architectures and Evaluation for Generality, Autonomy and Progress in AI (AEGAP) focuses on our field's original grand dream: the creation of cognitive autonomous agents with general intelligence that matches (or exceeds) that of humans. We want AI that understands its users and their values, so that it can form beneficial and satisfying relationships with them.
In 2018, it is about three decades since John McCarthy published a new version of his 1971 Turing Award Lecture on “Generality in Artificial Intelligence”. Since he coined the term "Artificial Intelligence", the field has come a long way. Progress has certainly been made: AI has grown from a niche science into a multi-billion-dollar endeavor that solves many practical tasks, and into a household term often viewed as the future of everything. However, it is not clear exactly how much progress has been made, especially with respect to AI's grand dream.
As the task turned out to be more difficult than anticipated in the 1950s, a divide-and-conquer approach was adopted that has resulted in a very successful but fractured field. AEGAP aims to bring together researchers from different sub-disciplines to discuss how their approaches and techniques can contribute to the goal of building beneficial AI with high levels of generality and autonomy. To achieve this goal, we will likely need to build large-scale, complex and dynamic architectures that integrate bottom-up and top-down approaches. One hopeful avenue may be to combine logic- or rule-based top-down approaches with neuroscience-inspired bottom-up approaches, so that intelligence might emerge from their interplay.
This cannot be done without methods for evaluating the different approaches to AI, both as they exist now and as they are developed in the future. While we can readily see the performance of AI systems in specific domains, it is more difficult to assess progress in AI, ML and autonomous agents when we put the focus on generality and autonomy. Real progress in this direction only takes place when a system exhibits enough autonomous flexibility to find a diversity of solutions for a range of tasks, some of which may not be known until after the system is deployed. Many evaluation platforms exist, but open research questions remain about how to define batteries or curricula of tasks that capture notions such as generality, transfer or learning to learn, with gradients of difficulty that actually represent the progress we want to make in several directions. How reproducibility can be ensured for fully autonomous systems must also be better understood as the goals become more open and general.
We welcome regular papers, short papers, demo papers about benchmarks or tools, and position papers, and encourage discussion on a broad list of topics. As AEGAP is the result of a merger between the Third Workshop on Evaluating Generality and Progress in Artificial Intelligence (EGPAI), the Second Workshop on Architectures for Generality & Autonomy (AGA) and the First Workshop on General AI Architecture of Emergence and Autonomy (AAEA), we are interested in submissions on both evaluation and architectures.
Oren Etzioni is Chief Executive Officer of the Allen Institute for Artificial Intelligence. He has been a Professor at the University of Washington's Computer Science department since 1991, receiving several awards including Seattle's Geek of the Year (2013), the Robert Engelmore Memorial Award (2007), the IJCAI Distinguished Paper Award (2005), AAAI Fellow (2003), and a National Young Investigator Award (1993). He has been the founder or co-founder of several companies, including Farecast (sold to Microsoft in 2008) and Decide (sold to eBay in 2013). He has written commentary on AI for The New York Times, Nature, Wired, and the MIT Technology Review. He helped to pioneer meta-search (1994), online comparison shopping (1996), machine reading (2006), and Open Information Extraction (2007). He has authored over 100 technical papers that have garnered over 1,800 highly influential citations on Semantic Scholar. He received his Ph.D. from Carnegie Mellon University in 1991 and his B.A. from Harvard in 1986.
Abstract: In a world where Google, Facebook, and others possess massive proprietary data sets and unprecedented computational power, how is a graduate student to make a dent in the universe? I'll address this conundrum by revisiting one of the holy grails of AI: acquiring, representing, and utilizing common-sense knowledge. Can we leverage modern methods including deep learning, NLP, and crowdsourcing to build AI systems that are more general, more robust to adversarial examples, and more data efficient than today's AI savants?
Tadahiro Taniguchi has been a Professor at the College of Information Science and Engineering, Ritsumeikan University, since 2017, and a Visiting General Chief Scientist at the AI Solution Center, Panasonic, also since 2017. He has been engaged in research on machine learning, cognitive robotics, emergent systems, and intelligent vehicles. The main focus of his research is symbol emergence in robotics, ranging from behavioral learning to language acquisition. From September 2015 to September 2016, he was a Visiting Associate Professor at Imperial College London. He serves on the editorial board of Advanced Robotics and of the Japanese Society of Artificial Intelligence. He is also a member of the Cognitive and Developmental Systems Technical Committee and the Emergent Technologies Technical Committee of the IEEE. He received his ME and Ph.D. degrees from Kyoto University in 2003 and 2006, respectively.
Abstract: Computational models that can reproduce human developmental and long-term learning processes have been widely explored. However, we have not yet obtained a computational model that enables a robot to learn internal representation systems and linguistic communication capabilities, i.e. symbol systems, automatically in a real-world environment. Symbol emergence in robotics is a research field that studies cognitive models which form symbol or representation systems in a bottom-up manner. In this talk, I will discuss symbol emergence in robotics and related topics, and introduce nonparametric hierarchical Bayesian methods for unsupervised word discovery, spatial concept formation, and perceptual category formation. To create an embodied, developmental, general artificial intelligence, we need an appropriate architecture that integrates many unsupervised-learning-based cognitive modules. I will also talk about our approach toward such a cognitive architecture.
The program will consist of invited talks, contributed talks, and group discussions. The order of the contributed talks may be subject to change.
Time | Event
---|---
8:00-8:30 | Registration
8:30-8:45 | Welcome
8:45-9:45 | Invited Talk: Oren Etzioni
9:45-10:00 | Contributed Talk
10:00-10:30 | Coffee break
10:30-11:15 | Contributed Talks
11:15-11:45 | Panel Discussion: Are consciousness and self the missing ingredients for true generality and autonomy?
11:45-12:30 | Contributed Talks
12:30-14:00 | Lunch
14:00-15:00 | Invited Talk: Tadahiro Taniguchi
15:00-15:30 | Contributed Talks
15:30-16:00 | Coffee break
16:00-16:30 | Contributed Talks
16:30-16:55 | OpenNARS Demo
16:55-17:25 | Contributed Talks
17:25-17:55 | Group Discussion
17:55-18:00 | Closing Words
Papers were accepted after being peer-reviewed by one or two reviewers each.
Authors | Title
---|---
Justin Svegliato and Shlomo Zilberstein | Adaptive Metareasoning for Bounded Rational Agents
Adam Liška, Germán Kruszewski and Marco Baroni | Memorize or generalize? Searching for a compositional RNN in a haystack
Nicolas Bougie and Ryutaro Ichise | Rule-based Reinforcement Learning augmented by External Knowledge
Nader Chmait | Using propensity score matching for bias-reduction in the comparison of performance between AI agents
Enrique Fernández-Macías, Emilia Gómez, José Hernández-Orallo, Bao Sheng Loe, Bertin Martens, Fernando Martínez-Plumed and Songül Tolan | A multidisciplinary task-based perspective for evaluating the impact of AI autonomy and generality on the future of work
Claes Strannegård, Wen Xu and Niklas Engsner | Evolution and Learning in Generic Animats
Patrick Hammer | Data Mining by Non-Axiomatic Reasoning
Selmer Bringsjord, Naveen Sundar Govindarajulu, Atriya Sen, Matthew Peveler, Biplav Srivastava and Kartik Talamadupula | Tentacular Artificial Intelligence, and the Architecture Thereof, Introduced
Jordi Bieger and Kristinn R. Thórisson | Requirements for General Intelligence: A Case Study in Trustworthy Cumulative Learning for Air Traffic Control
Jisha Maniamma and Hiroaki Wagatsuma | Human Abduction for Solving Puzzles to Find Logically Explicable Rules to Discriminate Two Picture Groups Ostracized Each Other: An Ontology-based Model
Eray Özkural | The Foundations of Deep Learning with a Path Towards General Intelligence
Eray Özkural | Omega: An Architecture for AI Unification
Kristinn R. Thórisson and Arthur Talbot | Abduction, Deduction & Causal-Relational Models
University of Electro-Communications, Japan
Okinawa Institute for Science and Technology, Japan
National Institute of Advanced Industrial Science and Technology, Japan
Kyushu Institute of Technology, Japan
Ritsumeikan University, Japan
University of Tokyo & Dwango AI Lab, Japan
Reykjavik University & Icelandic Institute for Intelligent Machines, Iceland
Temple University, U.S.
Chalmers University of Technology & University of Gothenburg, Sweden
University of Palermo, Italy
University of Hertfordshire, U.K.
Delft University of Technology, The Netherlands & Reykjavik University, Iceland
Polytechnic University of Valencia, Spain
Centre for the Study of Existential Risk, U.K.
Victoria University, Australia
Polytechnic University of Valencia, Spain
Centre for the Study of Existential Risk, U.K.
Eizo Akiyama | Tsukuba University, Japan |
---|---|
Joscha Bach | Harvard University, U.S. |
Marco Baroni | Facebook AI Research, U.S. |
Tarek Richard Besold | City University of London, U.K. |
Jordi Bieger | Delft University of Technology, The Netherlands & Reykjavik University, Iceland |
Selmer Bringsjord | Rensselaer Polytechnic Institute, U.S. |
Miles Brundage | University of Oxford, U.K. |
Lola Cañamero | University of Hertfordshire, U.K. |
Antonio Chella | University of Palermo, Italy |
Virginia Dignum | Delft University of Technology, The Netherlands |
Haris Dindo | Yewno & University of Palermo, Italy |
David Dowe | Monash University, Australia |
Kenji Doya | Okinawa Institute of Science and Technology, Japan |
Emmanuel Dupoux | EHESS, France |
Jan Feyereisl | AI Roadmap Institute & GoodAI, Czech Republic |
Patrick Hammer | Temple University, U.S. |
Helgi P. Helgason | Activity Stream, Iceland |
Bernhard Hengst | University of New South Wales, Australia |
Sean Holden | University of Cambridge, U.K. |
Hidenori Kawamura | Hokkaido University, Japan |
David Kremelberg | Icelandic Institute for Intelligent Machines, Iceland |
Satoshi Kurihara | Keio University, Japan |
Othalia Larue | Wright State University, U.S. |
Ramon Lopez de Mantaras | AI Research Institute of the Spanish National Research Council, Spain |
Richard Mallah | Future of Life Institute, U.S. |
Tomas Mikolov | Facebook AI Research, U.S. |
Itsuki Noda | National Institute of Advanced Industrial Science and Technology, Japan |
Frans A. Oliehoek | University of Liverpool, U.K. |
Satoshi Ono | Kagoshima University, Japan |
Laurent Orseau | DeepMind, U.K. |
Ricardo B.C. Prudencio | Federal University of Pernambuco, Brazil |
Gavin Rens | University of Cape Town, South Africa |
Hiroyuki Sato | University of Electro-Communications, Japan |
Ute Schmid | Universität Bamberg, Germany |
Murray Shanahan | Imperial College London & DeepMind, U.K. |
Carles Sierra | IIIA-CSIC, Spain & UT Sydney, Australia |
Jim Spohrer | IBM Research, U.S. |
Bas Steunebrink | NNAISENSE, Switzerland |
Claes Strannegård | Chalmers University of Technology & University of Gothenburg, Sweden |
Reiji Suzuki | Nagoya University, Japan |
Tadahiro Taniguchi | Ritsumeikan University, Japan |
Kristinn R. Thórisson | Reykjavik University & Icelandic Institute for Intelligent Machines, Iceland |
Hiroaki Wagatsuma | Kyushu Institute of Technology, Japan |
Pei Wang | Temple University, U.S. |
Hiroshi Yamakawa | Dwango Artificial Intelligence Laboratory, Japan |
Masahito Yamamoto | Hokkaido University, Japan |
Papers should be between 2 and 12 pages (excluding references) and describe the authors' original work in full (no extended abstracts). Formatting guidelines, LaTeX styles and an MS Word template can be downloaded from here. Papers will be subject to peer review and can be accepted for oral presentation and/or poster presentation. For papers that have previously been submitted to IJCAI and rejected, we ask authors to append the reviews and their responses to aid our review process.
Proposals for demonstrations should be accompanied by a 2-page description for inclusion in the workshop's pre-proceedings. Examples include, but are not limited to: (interactively) demonstrating new tests or benchmarks, or the performance of a robot, (cognitive) architecture, or design methodology.
Oral presentations should be given by one of the authors during one of the Contributed Talks Sessions.
Accepted papers will be gathered into a volume of pre-proceedings and published on this website before the workshop. We are looking into the possibility of producing a special issue of an archival journal.