Distributed Intelligence for Smart Assistive Appliances

A Cognition Briefing

Contributed by: Cecilio Angulo and Ricardo Téllez, Technical University of Catalonia

A main goal for researchers designing smart assistive appliances is to develop strategies that allow both early detection and avoidance of problems that could lead to decreased independence. Smart assistive systems use sensors and actuators that monitor users, communicate with each other, and intelligently support users in their daily activities. Market analysts predict that this topic will be a major growth area over the next decade: people aged 65 and older are the fastest-growing segment of the population in developed countries, and over 20% of people aged 85 and over have a limited capacity for independent living, so they require continuous monitoring and daily care. The creation of secure, unobtrusive, adaptable environments for monitoring health and encouraging healthy behaviours will be vital to the delivery of assistance in the future. Medical analysis, smart sensors, intelligent software agents, distributed control, wireless communication and internet resources are all active research areas within this field.

For continuous monitoring, unobtrusive and inexpensive sensors must be deployed. Sensors are inherently noisy and unreliable; robustness has traditionally been achieved by deploying a large number of them. A more flexible and adaptable approach to smart assistive systems is to integrate models and algorithms of computational intelligence with the processing of sensor data in order to link both with human behaviours: sensor data feed into computational intelligence techniques; cooperative communication between units is implemented through a wireless network; and internet resources link the whole assistive net with external services. Multi-agent systems and architectures based on smart devices have recently been explored as health-care monitoring systems. Although these approaches have produced some useful results, and interesting lessons have been learnt about the reliability and scalability of the architecture, the most desirable features, such as adaptability and learning from the user, have not yet been implemented. Dedicated distributed architectures for such smart sensor nodes pose challenges to current research.
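The robustness-through-redundancy idea above can be made concrete with a short sketch: rather than trusting any single noisy sensor, a deployment combines many cheap readings with a robust statistic such as the median. The sensor model, failure rate and function names below are illustrative assumptions, not part of the system described here.

```python
import random
import statistics

# Illustrative sketch only: robustness from a large number of cheap,
# unreliable sensors, fused with a robust statistic (the median), which
# tolerates both Gaussian noise and outright faulty readings.

TRUE_TEMP = 21.0  # assumed ground-truth room temperature, degrees C

def read_sensor(fail_prob=0.1):
    """One cheap sensor: Gaussian noise, plus occasional total failure."""
    if random.random() < fail_prob:
        return random.uniform(-40.0, 80.0)  # broken sensor, arbitrary value
    return random.gauss(TRUE_TEMP, 0.5)

def fused_reading(n_sensors=25):
    """Median over a redundant deployment discards the outlier failures."""
    return statistics.median(read_sensor() for _ in range(n_sensors))

random.seed(0)  # make the sketch reproducible
estimate = fused_reading()
```

Even with roughly one sensor in ten failing completely, the median stays close to the true value, which is exactly the robustness that a single expensive sensor could not provide.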

In this sense, novel studies are emerging that are concerned with the development of appropriate smart architectures, starting from the conversion of passive sensors into smart devices by adding on-line processing capabilities and using them as plug-and-play mechanisms. The proposed approach considers all these devices from a modular perspective and integrates them into a cognitive system environment of multiple 'physical agents': sensor and actuator devices are the key to obtaining adaptability, interfacing between the external real world of the user and the internal machine-world representation of the agents. Physical agents, or 'intelligent hardware units' (IHUs), are created by embedding flexible computing techniques into the sensors and actuators, together with communication abilities to share information. An IHU is physically composed of a hardware device, or a small number of them, sensing (or acting on) the 'external world', and an associated micro-controller that generates decisions from the available data and shares information with other units, connecting it with the 'internal world'. The micro-controller is the part of the IHU that processes information and handles communication. An external computer could manage the overall goal task and would add reliability to the system as a whole.
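As a rough illustration of the IHU structure just described, the sketch below models a unit as a device plus micro-controller logic that turns raw external-world readings into decisions and shares them with peer units, building up each peer's internal-world view. The class, method names and threshold rule are hypothetical, chosen only to make the idea concrete.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of an "intelligent hardware unit" (IHU): a sensing
# device plus a micro-controller that decides from local data and shares
# the decision with peers, who store it as an internal representation.

@dataclass
class IHU:
    name: str
    threshold: float                            # local decision rule
    peers: list = field(default_factory=list)   # communication links
    inbox: dict = field(default_factory=dict)   # internal-world view of peers

    def connect(self, other):
        """Create a symmetric communication link between two units."""
        self.peers.append(other)
        other.peers.append(self)

    def sense_and_decide(self, raw_value):
        """Translate an external-world signal into a decision and
        broadcast it so that peers can maintain coordination."""
        decision = raw_value > self.threshold
        for peer in self.peers:
            peer.inbox[self.name] = decision
        return decision

# Usage: a motion IHU informs a lamp IHU through the shared representation.
motion = IHU("motion", threshold=0.5)
lamp = IHU("lamp", threshold=1.0)
motion.connect(lamp)
motion.sense_and_decide(0.9)
print(lamp.inbox)  # {'motion': True}
```

The point of the separation is that the lamp unit never touches the motion sensor's raw signal; it only sees the translated decision, which is what makes the units modular and plug-and-play.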

Groups of these devices should have enough collective awareness to function autonomously based on sensor data. The most important aim of this vision is to design a collaborative computing structure that merges the intelligent hardware units with the necessary information processing in order to generate a friendly operating scene through appropriate user interfaces. During the training phase, the information sent between physical agents is used to learn coordination. When training is finished, the IHUs have learnt how to use information about the state of other elements in order to collaborate with them, and they will use communication to maintain that coordination: decisions taken by the embedded IHUs, and relevant data sensed from the external world, are available all around the network in the form of internal representations carried over the communication network. Furthermore, communication allows the assistive network to react to unexpected situations, including, where necessary, the shutting down of some integrated devices.

The main objective is that a certain proportion of the physical agents learn in an unsupervised manner which specific individual task they must implement with their device to contribute towards the common goal. Because of their simple structure, their task is primarily to translate adequately the signals (external world) sent by the hardware device connected to them, in such a form that the shared translated information (internal or representational world) allows the goal task demanded by the user to be reached. Learning and adaptation derived from information processing allow the health network to improve its performance in a number of ways: (a) knowledge is inserted into the system (facts, behaviours, rules); (b) concepts are generalized from multiple instances; (c) information is re-organized more efficiently within the system; (d) new concepts are discovered or designed; (e) experience is exploited.
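One minimal, unsupervised form of the 'translation' task above is for each agent to learn, from its own signal stream alone, how to normalise its device's raw readings so that the values it shares on the network are comparable across heterogeneous sensors. The sketch below uses Welford's running mean and variance for this; the class and its methods are assumptions for illustration, not the article's actual algorithm.

```python
import math

# Hedged sketch: an agent learns, unsupervised, a running mean and
# variance of its raw signal (Welford's algorithm) and uses them to map
# external-world readings into a shared, comparable internal scale.

class TranslatingAgent:
    def __init__(self):
        self.n = 0
        self.mean = 0.0
        self.m2 = 0.0   # sum of squared deviations from the running mean

    def observe(self, x):
        """Update running statistics from the raw device signal."""
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)

    def translate(self, x):
        """Map a raw reading into the shared internal representation."""
        std = math.sqrt(self.m2 / self.n) if self.n > 1 and self.m2 > 0 else 1.0
        return (x - self.mean) / std

agent = TranslatingAgent()
for raw in [10.0, 12.0, 11.0, 13.0, 9.0]:
    agent.observe(raw)
print(round(agent.translate(11.0), 2))  # 0.0: an "average" reading
```

Because each agent calibrates itself against its own history, no supervisor needs to know the units or range of any individual device, which fits the plug-and-play ambition described earlier.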