Alcides X. Benicasa, Marcos G. Quiles, Liang Zhao, Roseli A. F. Romero.
Research on computational models of visual attention has mainly focused on the bottom-up guidance of early visual features. Here we propose a new model that combines bottom-up and top-down modulation in the visual selection process. The proposed model is composed of five main components: a visual feature extraction module; a LEGION network for image segmentation, used to delimit objects; a Multi-Layer Perceptron (MLP) network for object recognition; a Kohonen Self-Organizing Map (SOM) combined with a network of integrate-and-fire neurons, which builds our attribute-saliency map; and, finally, an object selection module, which highlights the most salient object in the scene. Experiments with synthetic and real images are conducted, and the results demonstrate the effectiveness of the proposed approach.
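As a rough illustration of the saliency stage, the sketch below simulates a layer of leaky integrate-and-fire neurons, one per segmented object, where stronger attribute-driven input produces a higher firing rate. This is a generic toy model, not the paper's actual network: the input values, leak rate, threshold, and winner-take-all readout are all assumptions for illustration.

```python
import numpy as np

def integrate_and_fire(inputs, threshold=1.0, leak=0.1, steps=100):
    """Simulate a layer of leaky integrate-and-fire neurons.

    inputs: constant input current per object (hypothetical
    attribute-driven drive). Returns the spike count per neuron;
    in this toy reading, more spikes means more salient.
    """
    v = np.zeros_like(inputs, dtype=float)   # membrane potentials
    spikes = np.zeros(len(inputs), dtype=int)
    for _ in range(steps):
        v += inputs - leak * v               # integrate input, leak charge
        fired = v >= threshold               # neurons crossing threshold
        spikes += fired                      # count emitted spikes
        v[fired] = 0.0                       # reset fired neurons
    return spikes

# Hypothetical example: three objects with different input strengths.
counts = integrate_and_fire(np.array([0.15, 0.30, 0.20]))
most_salient = int(np.argmax(counts))        # index of the winning object
```

With these made-up inputs, the neuron receiving the strongest drive fires most often, so a simple argmax over spike counts selects it, mirroring the idea of highlighting the most salient object.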
http://www.lbd.dcc.ufmg.br/colecoes/sbrn/2012/0041.pdf