ECVision indexed and annotated bibliography of cognitive computer vision publications
This bibliography was created by Hilary Buxton and Benoit Gaillard, University of Sussex, as part of ECVision Specific Action 8-1
The complete text version of this BibTeX file is available here: ECVision_bibliography.bib


I. Guyon and P. S. P. Wang
Special Issue on Advances in Pattern Recognition Using Neural Networks

ABSTRACT

``Neural Networks'' are learning systems inspired by highly simplified models of the brain. Often implemented in software, they are used for Signal Processing and Artificial Intelligence tasks. In the past few years, Neural Network techniques have proven very successful in Pattern Recognition. Yet, although recognizing patterns is a seemingly simple task that even young children perform with ease, it remains challenging for machines, including Neural Networks. The commonplace rationale for using Neural Networks is that a machine whose architecture imitates that of the brain should inherit its remarkable intelligence. This logic usually contrasts with the reality of Neural Network performance.

In this book, however, the authors have kept some distance from the biological foundations of Neural Networks. The success of their applications relies, to a large extent, on careful engineering. For instance, many novel aspects of the work presented here concern combining Neural Networks with other ``non-neural'' modules. Few papers in this book are introductory. The papers cover a wide variety of applications in Pattern Recognition, including Speech, Optical Character Recognition and Signature Verification, Vision and Language. Feed-forward networks trained with the Back-Propagation algorithm are by far the most popular, but some papers also use Radial Basis Functions or related methods, and others use Recurrent Networks. Two key words may be common to all the papers: structure and prior knowledge. The idea of using a fully connected ``back-prop net'' on top of the raw data and hoping for the best is no longer fashionable. Different ways of improving performance by making efficient use of the designer's prior knowledge are investigated. The authors generally use pre- and post-processor modules which incorporate structural knowledge about the task.
Of particular interest are pre-processors which enforce known invariances in the data, such as translation and rotation, and Graph Algorithmic post-processors, including HMMs (Hidden Markov Models), which permit addressing the segmentation problem. A complementary way of incorporating prior knowledge is to constrain the structure of the Neural Network itself. The most widely used constrained networks are convolutional networks, whose one-dimensional version is known as the TDNN (Time Delay Neural Network) and whose two-dimensional version is derived from the structure of the Neocognitron. The TWN (Time Warping Network) is another kind of constrained network, using elastic matching units, which turns out to be a powerful extension of classical HMMs. Finally, super-structures are introduced in the form of multiple-expert systems, whereby several Neural Networks specialized to solve subtasks are used jointly to make the final decision. We thank the authors and the reviewers for their efforts to contribute high-quality papers and many useful references.
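The weight-sharing idea behind constrained networks such as the TDNN can be sketched in a few lines: the same small weight kernel is applied at every time step, so the parameter count is independent of input length and the layer is equivariant to translation. The following NumPy sketch is purely illustrative (none of the names or shapes come from the papers in this issue):

```python
import numpy as np

def tdnn_layer(x, w, b):
    """One 1-D convolutional (TDNN-style) layer.

    The same kernel w is applied at every valid time window, so the
    number of free parameters does not grow with the sequence length
    (weight sharing), and shifting the input shifts the output.
    x: (T, d_in) input sequence; w: (k, d_in, d_out) kernel; b: (d_out,)
    """
    T, d_in = x.shape
    k, _, d_out = w.shape
    out = np.empty((T - k + 1, d_out))
    for t in range(T - k + 1):
        window = x[t:t + k]  # (k, d_in) slice of the sequence
        # contract over the time-window and input-feature axes
        out[t] = np.tensordot(window, w, axes=([0, 1], [0, 1])) + b
    return np.tanh(out)  # squashing nonlinearity

# A fully connected layer mapping a length-100 sequence of 16-dim
# frames to 8 outputs per frame would need 100*16*8 position-specific
# weights; this TDNN layer with kernel width 3 shares 3*16*8 weights
# across all positions.
rng = np.random.default_rng(0)
x = rng.standard_normal((100, 16))
w = rng.standard_normal((3, 16, 8)) * 0.1
b = np.zeros(8)
y = tdnn_layer(x, w, b)
print(y.shape)  # one 8-dim output frame per valid window position
```

Shifting the input by one frame simply shifts the output by one frame, which is the translation invariance the abstract refers to.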


Site generated on Friday, 06 January 2006