NLOOK: a computational attention model for robot vision

Milton Roberto Heinen, Paulo Martins Engel

Computational models of visual attention, originally proposed as cognitive models of human attention, are nowadays being used as front-ends to robotic vision systems, for tasks such as automatic object recognition and landmark detection. However, these applications have different requirements from those for which the models were originally proposed. More specifically, a robotic vision system must be relatively insensitive to 2D similarity transforms of the image, such as in-plane translations, rotations, reflections and scalings, and it should also select fixation points in scale as well as in position. In this paper a new visual attention model, called NLOOK, is proposed. The model is validated through several experiments, which show that it is less sensitive to 2D similarity transforms than two other well-known and publicly available visual attention models: NVT and SAFE. Moreover, NLOOK selects more accurate fixations than the other attention models, and it can also select the scale of each fixation. Thus, the proposed model is a good tool for use in robot vision systems.
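The abstract does not describe how insensitivity to 2D similarity transforms is measured; one common way to quantify it is to map the fixations found on the original image through the same transform applied to the image, and then measure how far they land from the fixations found on the transformed image. The sketch below illustrates this idea under stated assumptions; the function names and the evaluation metric are illustrative, not NLOOK's actual procedure:

```python
import numpy as np

def similarity_transform(points, angle_deg, scale, translation):
    """Apply a 2D similarity transform (rotation + uniform scale +
    translation) to an (N, 2) array of fixation points."""
    theta = np.deg2rad(angle_deg)
    rot = np.array([[np.cos(theta), -np.sin(theta)],
                    [np.sin(theta),  np.cos(theta)]])
    return scale * points @ rot.T + np.asarray(translation, dtype=float)

def fixation_error(fix_orig, fix_transformed, angle_deg, scale, translation):
    """Mean distance between each original fixation, mapped through the
    transform, and its nearest fixation detected on the transformed image.
    A lower value means the attention model is less sensitive to the
    transform. (Illustrative metric, not the paper's exact measure.)"""
    mapped = similarity_transform(fix_orig, angle_deg, scale, translation)
    # pairwise distances: mapped fixations vs. fixations on transformed image
    dists = np.linalg.norm(mapped[:, None, :] - fix_transformed[None, :, :],
                           axis=2)
    return dists.min(axis=1).mean()

# Toy example: a hypothetical model that detects fixations perfectly
# under a 90-degree rotation would score (near) zero error.
fix = np.array([[10.0, 0.0], [0.0, 20.0]])
fix_rot = similarity_transform(fix, 90.0, 1.0, (0.0, 0.0))
print(fixation_error(fix, fix_rot, 90.0, 1.0, (0.0, 0.0)))
```

A real evaluation would run the attention model on both the original and the transformed image, then compare the two resulting fixation sets with a metric of this kind.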


Biblioteca Digital Brasileira de Computação