Robotic technology is advancing apace, and a top team of European scientists and engineers now hopes to make the leap from single-function ‘dumb’ machines to adaptive, learning machines.
The concept of a cognitive robotic companion inspires some of the best science fiction, but it may one day become science fact thanks to the four-year COGNIRON project, funded since January 2004 by the IST’s Future and Emerging Technologies initiative. But what could a cognitive robot companion actually do?
“Well, that’s a difficult question. The example that’s often used is a robot that’s able to fulfil your needs, like passing you a drink or helping in everyday tasks,” says Dr Raja Chatila, research director at the Systems Architecture and Analysis Laboratory of the French Centre National de la Recherche Scientifique (LAAS-CNRS), and COGNIRON project coordinator.
“That might seem a bit trivial, but let me ask you a question: In the 1970s, what was the use of a personal computer?” he asks.
It’s a good point. In fact, it was then impossible to imagine how PCs would change the world’s economics, politics and society in just 30 years. The eventual uses, once the technology developed, were far from trivial.
COGNIRON set out on the same principle. Society is constantly evolving, and the project partners hope to tackle some of the key issues that must be resolved to develop a cognitive robot companion: a machine that could serve as an assistant for disabled and elderly people, or for the general population. Who wouldn’t like, for instance, their breakfast ready when they awoke, deliveries accepted while they were at work, and their apartment cleaned upon their return?
Developing intelligent behaviour
The key issue governing all these tasks is intelligence, and developing intelligent behaviour on a number of fronts is the cornerstone and main work of COGNIRON.
Organised around seven key research themes, the project studies multimodal dialogues, detection and understanding of human activity, social behaviour and embodied interaction, skill and task learning, spatial cognition and multimodal situation awareness, as well as intentionality and initiative. Finally, the seventh research theme, systems-level integration and evaluation, focuses on integrating all the other themes into a cohesive, cogitating whole.
Dr Chatila summarises the purpose of the seven themes. “Research breaks down into four capacities required by a cognitive robot companion: perception and cognition of the environment; learning by observation; decision making; and communication and interaction with humans.”
Decision-making is a fundamental capability of a cognitive robot whether it’s for autonomous deliberation, task execution, or for human-robot collaborative problem solving. It also integrates the three other capacities: interaction, learning and understanding the environment.
“Getting a robot to move around a human, without hurting them, and while making them feel comfortable, is a vital task,” says Dr Chatila.
To work, it means a robot must pick up subtle cues. If, for instance, a human leans forward to get up, the robot needs to understand the purpose of that movement. What’s more, much of human communication is non-verbal, and such cognitive machines need to pick up on that if they are to be useful, rather than irritating.
Even in verbal communication there are many habits robots need to acquire that are so second nature to humans that we never think of them. “For example, turn taking in conversation. Humans take turns to [talk], we need to find a way to make robots do the same,” says Dr Chatila. A robot that keeps interrupting would get on an owner’s nerves.
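Turn-taking of this kind can be framed as a simple policy over detected speech events. The sketch below is purely illustrative and assumes nothing about COGNIRON’s actual dialogue system; the event format, speaker labels and silence threshold are all hypothetical.

```python
# Hypothetical sketch of conversational turn-taking: the robot speaks
# only when the human has yielded the turn. The threshold value and
# the (timestamp, speaker) event format are illustrative assumptions,
# not COGNIRON's actual dialogue model.

SILENCE_THRESHOLD = 0.7  # seconds of silence treated as a yielded turn

def robot_may_speak(events, now):
    """events: list of (timestamp, speaker) tuples for detected speech.
    The robot may take the turn only if the human has been silent
    for longer than the threshold."""
    human_speech = [t for t, who in events if who == "human"]
    if not human_speech:
        return True  # nobody has spoken yet; the robot may open
    return (now - max(human_speech)) > SILENCE_THRESHOLD

events = [(0.0, "human"), (1.2, "human")]
print(robot_may_speak(events, now=1.5))  # human spoke 0.3 s ago: wait
print(robot_may_speak(events, now=2.5))  # 1.3 s of silence: may speak
```

A real system would of course fuse prosody, gaze and gesture rather than rely on a fixed silence timer, but even this crude rule captures why a robot that ignores turn boundaries would constantly interrupt.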
To tackle the problems, the researchers took inspiration from natural cognition as it occurs in humans, which is one reason why a cognitive robot companion needs to be able to learn.
Take perception. In machines, the environment is usually represented in a geometric model, which is excellent for a quantitative snapshot of an area, like an architect’s blueprint. But humans don’t perceive their environment that way; they use a topological model, which provides a more qualitative representation of reality. “An architect’s blueprint might tell you the dimensions, but a topological model will tell you the function of elements, like a door, a corridor, or the nature and use of a given room, for example.”
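The contrast between the two models can be made concrete. In the sketch below, both structures describe the same (invented) apartment: the geometric model answers quantitative questions such as room dimensions, while the topological model encodes function and connectivity, supporting qualitative queries like “how do I get from the kitchen to the bedroom?” The room names, fields and values are illustrative assumptions, not COGNIRON’s actual representations.

```python
# Hypothetical sketch contrasting a geometric and a topological model
# of the same apartment. All names and values are illustrative.
from collections import deque

# Geometric model: quantitative, like an architect's blueprint.
geometric = {
    "kitchen":  {"x": 0.0, "y": 0.0, "width": 3.2, "height": 2.8},
    "corridor": {"x": 3.2, "y": 0.0, "width": 1.1, "height": 2.8},
    "bedroom":  {"x": 4.3, "y": 0.0, "width": 3.5, "height": 2.8},
}

# Topological model: qualitative, encoding function and connectivity.
topological = {
    "kitchen":  {"function": "preparing food", "connects": ["corridor"]},
    "corridor": {"function": "passage", "connects": ["kitchen", "bedroom"]},
    "bedroom":  {"function": "sleeping", "connects": ["corridor"]},
}

def route(graph, start, goal):
    """Breadth-first search over place connectivity - the kind of
    qualitative query a topological model answers directly."""
    frontier = deque([[start]])
    visited = {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path
        for nxt in graph[path[-1]]["connects"]:
            if nxt not in visited:
                visited.add(nxt)
                frontier.append(path + [nxt])
    return None

print(route(topological, "kitchen", "bedroom"))
print(geometric["kitchen"]["width"])  # the geometric model yields dimensions
```

The point of the contrast is that the topological query returns a sequence of meaningful places, whereas the geometric model can only report coordinates and sizes; a companion robot arguably needs both.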
Three key experiments
Despite its highly ambitious aims, the project has made enormous progress, and the team feel confident they will meet their criteria for success: three concrete implementations, the so-called ‘Key Experiments’, deployed on real robots to integrate, demonstrate and validate the research results.
One experiment will feature a robot building a model of its environment in the course of a home tour; another will feature a curious and proactive robot able to infer that a human needs something done; while the third will demonstrate a robot’s ability to learn by imitation and repetition.
In fact, the project has already partially implemented all three experiments, eighteen months before the project ends. “The three experiments are an expression of our achievement in research and integration,” says Dr Chatila.
He emphasises that this is a promising start, but there is a very long road ahead before a fully functional cognitive robot companion is realised and potentially commercialised. COGNIRON will advance the state of the art and the understanding of the different components required, but will not yet allow a fully integrated robot endowed with all the required capacities to be built.