FIPA96/06/04 18:22
FOUNDATION FOR INTELLIGENT PHYSICAL AGENTS nyws001
Source: H. Hexmoor (University of Buffalo)

Autonomous Intelligent Agency

Nontrivial autonomous agents aggregate and form abstractions from their sensory, non-sensory, and a priori inputs, all the while interacting with the world in a timely way and learning from their interactions. Several important aspects of the evolution of successful interactions are (a) learning skills from knowledge (automaticity), (b) habituation and skill refinement, (c) learning to coordinate and interface concurrent behaviors in order to adapt and increase performance, and (d) learning to aggregate sensory inputs or motor responses in ways that lead to mental awareness, concept formation, and augmentation of the "meaning" of prior concepts.

Recently, I formulated a theory that proposes processes to explain the learning of routine activities. This theory covers many of the aspects listed above, and many of its principles are implemented in an architecture called GLAIR.

Grounded Layered Architecture with Integrated Reasoning (GLAIR) is an architecture I have developed in collaboration with Stuart Shapiro and Johan Lammens to model agents that function in the world. The architecture is used to demonstrate situated action as well as deliberative action. Using it, I model agents that engage in routines, use those routines to guide their behavior, and learn further concepts having to do with acts. The GLAIR architecture is characterized by a three-level organization into a Knowledge Level (KL), a Perceptuo-Motor Level (PML), and a Sensori-Actuator Level (SAL). GLAIR is a general multi-level architecture for autonomous cognitive agents with integrated sensory and motor capabilities. It offers an "unconscious" layer for modeling tasks that exhibit a close affinity between sensing and acting, i.e., behavior-based AI modules, and a "conscious" layer for modeling tasks that exhibit delays between sensing and acting. GLAIR provides learning mechanisms that allow autonomous agents to learn emergent behaviors and add them to their repertoire of behaviors. GLAIR motivates the concept of embodiment. All GLAIR-based agents display a variety of integrated behaviors; we distinguish between deliberative, reactive, and reflexive behaviors. Embodied representations at the PML facilitate this integration. As we move down from the KL to the PML and SAL, computational and representational power is traded for better response time and simplicity of control. The agent learns from its interactions with the environment.
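To make the three-level organization concrete, the following Python sketch shows one possible way an agent step could give lower levels priority over higher ones, with reflexive responses at the SAL, learned reactive routines at the PML, and slower deliberation at the KL. This is a minimal illustration under assumed names (KnowledgeLevel, PerceptuoMotorLevel, SensoriActuatorLevel, agent_step); it is not the actual GLAIR implementation.

# Illustrative sketch only: a hypothetical three-level agent loop in the
# spirit of GLAIR's KL / PML / SAL organization. All names and control flow
# are assumptions for exposition, not GLAIR's actual code.

class SensoriActuatorLevel:
    """Lowest level: raw sensing and actuation, plus hard-wired reflexes."""
    def sense(self):
        return {"bumper": False}          # placeholder percept

    def reflex(self, percept):
        # Reflexive behavior: immediate, unmediated response to a stimulus.
        return "stop-motors" if percept["bumper"] else None

    def actuate(self, command):
        print("actuating:", command)

class PerceptuoMotorLevel:
    """Middle level: 'unconscious' routines coupling perception to action."""
    def __init__(self):
        self.routines = {"obstacle-ahead": "turn-left"}   # learned skills

    def abstract(self, percept):
        # Aggregate raw sensory input into a perceptual feature.
        return "obstacle-ahead" if percept["bumper"] else "clear"

    def react(self, percept):
        # Reactive behavior: look up a learned perceptuo-motor routine.
        return self.routines.get(self.abstract(percept))

class KnowledgeLevel:
    """Top level: 'conscious' deliberation over symbolic knowledge."""
    def deliberate(self, feature):
        # Deliberative behavior: slower, knowledge-based action selection.
        return "plan-detour" if feature == "obstacle-ahead" else "continue-plan"

def agent_step(kl, pml, sal):
    percept = sal.sense()
    # Lower levels answer first: reflexes, then reactions, then deliberation.
    command = (sal.reflex(percept)
               or pml.react(percept)
               or kl.deliberate(pml.abstract(percept)))
    sal.actuate(command)

agent_step(KnowledgeLevel(), PerceptuoMotorLevel(), SensoriActuatorLevel())

In this sketch the trade-off described above appears as control structure: the SAL and PML respond with simple lookups, while the KL is consulted only when no lower-level behavior applies.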

In this forum, I will present two aspects of my work: (a) design principles for modeling the cognitive and sub-cognitive processes involved in perceptual and motor learning, and (b) the knowledge and programming representational requirements within an intelligent agency. I will characterize various forms of learning and evolution in agents in terms of processes rather than techniques, and I will argue that we need a variety of representational frameworks in order to produce the desired agent functionalities.

Henry Hexmoor

226 Bell Hall

Dept of CS, University of Buffalo

Buffalo, NY 14260