FIPA96/06/11 13:10
FOUNDATION FOR INTELLIGENT PHYSICAL AGENTS nyws008
Source: François Arlabosse, Jean Sallantin

Intelligent Agents: The measure of their intelligence

Introduction

Intelligent agents will spread into a wide variety of technological artefacts and will execute tasks that normally require a level of intelligence attributed to humans. In the emerging world of industrial activity in which this intelligence will be distributed, the reliability of its source must be established in order to validate, or certify, that the results produced are sound. In critical missions, and even in everyday life, a measure of this embedded intelligence will be required for systems built on intelligent agents to be acceptable.

An architecture for modeling and validation.

The efforts to build robots have shown sufficient similarity to warrant the suggestion that they constitute the emergence of a new paradigm in robot architecture.

We adopt the convention that an intelligent autonomous system is referred to as a robot.

It is often supposed that there is a strong correlation between the degree of autonomy that can be allocated to a technical system and the level of physical intelligence of that system. In fact the complexity of the task to be accomplished and the uncertainties of the environment strongly influence this relation, which does not always hold.

Modeling and testing are important issues for robotics, as robotic systems become more and more sophisticated, integrating embedded modules with their own physical resources and control software. A generic methodology should cover every step of physical intelligent agent design, from specification to test, including mission assignment as well as software and hardware issues.

The notion of autonomy, applied to robotics, is the ability of a given robot to perform the mission for which it is designed, given the constraints inherent to the mission environment. When this environment is dynamic and partially unknown, autonomy is related to intelligence, understood as the ability of an agent to deal with an unknown environment in order to fulfill its goals.

Our position is that testing the autonomy of any robot can be regarded as a kind of Turing test; hence our architecture should allow us to specify a Turing test adapted to a given robot. Recall the basis of the Turing test: a human operator communicates with another agent through a terminal. The operator must find out whether the other agent is a human being or a machine. The machine is considered intelligent if the operator cannot discriminate whether or not he is communicating with a machine. This test induces two implicit statements:
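The discrimination protocol described above can be sketched in a few lines of code. This is only an illustrative sketch, not part of any FIPA specification: all names (HumanAgent, MachineAgent, run_trial, discrimination_rate) are hypothetical. A hidden agent is drawn at random, the operator questions it through a "terminal" (here, a transcript), and the machine is judged indistinguishable when the operator's success rate stays near chance level (0.5).

```python
import random

class HumanAgent:
    """Stand-in for the human interlocutor (illustrative only)."""
    def reply(self, question: str) -> str:
        return f"Well, I would say {question.lower().rstrip('?')} depends."

class MachineAgent:
    """A machine that imitates the human's answering style."""
    def reply(self, question: str) -> str:
        return f"Well, I would say {question.lower().rstrip('?')} depends."

def run_trial(operator_guess, questions, rng):
    """One trial: hide an agent behind the terminal, let the operator
    question it, and record whether the operator identified it correctly."""
    is_machine = rng.random() < 0.5
    agent = MachineAgent() if is_machine else HumanAgent()
    transcript = [(q, agent.reply(q)) for q in questions]
    return operator_guess(transcript) == is_machine

def discrimination_rate(operator_guess, questions, n_trials=1000, seed=0):
    """Fraction of trials where the operator guessed right. The machine
    'passes' when this stays near 0.5: no reliable discrimination."""
    rng = random.Random(seed)
    correct = sum(run_trial(operator_guess, questions, rng)
                  for _ in range(n_trials))
    return correct / n_trials

# Since both agents answer identically, the operator can only guess
# at random, and the measured rate hovers around chance level.
rng_op = random.Random(42)
rate = discrimination_rate(lambda transcript: rng_op.random() < 0.5,
                           ["Can a robot be autonomous?"])
print(round(rate, 2))  # near 0.5: the machine is not discriminated
```

For a real robot, the "questions" would be mission scenarios and the operator's judgment would bear on the observed behaviour rather than on typed answers, but the statistical structure of the test is the same.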

Conclusion

Testing complex systems is a crucial issue in fields like mobile robotics, where validation relies on the notion of autonomy: the ability of a robot to evolve safely and fulfill its mission within a given environment. There is a need for a generic methodology to handle a system's life-cycle, from its requirements in terms of mission through to test and revision. Our aim is to specify a platform, featuring a methodology and software tools, able to define guidelines for modeling, simulating and validating any physical intelligent agent.

François Arlabosse              Jean Sallantin
<farlabos@framentec.fr>         <js@lirmm.fr>
Framentec-Cognitech             LIRMM
Tour FRAMATOME                  161, rue Ada
1, Place de la Coupole          34392 Montpellier Cedex 5
92084 PARIS-LA-DEFENSE          FRANCE
FRANCE