Building the what and where systems: multi-scale lines, edges and keypoints

Bibliographic Details
Main Author: Rodrigues, J. M. F. (author)
Other Authors: Almeida, D. (author), Nunes, S. (author), Lam, Roberto (author), du Buf, J. M. H. (author)
Format: article
Language: eng
Published: 2009
Subjects:
Online Access: http://hdl.handle.net/10400.1/162
Country: Portugal
OAI: oai:sapientia.ualg.pt:10400.1/162
Description
Summary: Computer vision for real-time applications requires tremendous computational power because all images must be processed from the first to the last pixel. Active vision, which probes specific objects on the basis of already acquired context, may lead to a significant reduction of processing. This idea is based on a few concepts from our visual cortex (Rensink, Visual Cogn. 7, 17-42, 2000): (1) our physical surround can be seen as memory, i.e. there is no need to construct detailed and complete maps; (2) the bandwidth of the what and where systems is limited, i.e. only one object can be probed at any time; and (3) bottom-up, low-level feature extraction is complemented by top-down hypothesis testing, i.e. there is rapid convergence of activities in dendritic/axonal connections.
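
To make the bottom-up, multi-scale part of this idea concrete, the following is a minimal Python sketch. It is not the authors' cortical model: standard Gaussian-derivative operators stand in for Gabor-based line/edge and keypoint cells, and all function names, scales and thresholds are illustrative assumptions for this example only.

# Illustrative sketch, not the article's method: multi-scale edge-magnitude
# maps and keypoints from a Gaussian scale space. Scales and thresholds are
# arbitrary choices for demonstration.
import numpy as np
from scipy import ndimage


def multiscale_features(image, scales=(1.0, 2.0, 4.0), keypoint_thresh=0.1):
    """Return per-scale edge-magnitude maps and keypoint coordinates."""
    image = image.astype(float)
    edges, keypoints = [], []
    for sigma in scales:
        smoothed = ndimage.gaussian_filter(image, sigma)

        # Line/edge evidence: gradient magnitude at this scale.
        gx = ndimage.sobel(smoothed, axis=1)
        gy = ndimage.sobel(smoothed, axis=0)
        edges.append(np.hypot(gx, gy))

        # Keypoint evidence: local maxima of the absolute Laplacian-of-Gaussian.
        log = np.abs(ndimage.gaussian_laplace(image, sigma))
        local_max = ndimage.maximum_filter(log, size=int(3 * sigma) | 1)
        ys, xs = np.nonzero((log == local_max) & (log > keypoint_thresh * log.max()))
        keypoints.append(np.column_stack([ys, xs]))

    return edges, keypoints


if __name__ == "__main__":
    # Synthetic test image: a bright square on a dark background.
    img = np.zeros((128, 128))
    img[40:90, 40:90] = 1.0
    edge_maps, kp_lists = multiscale_features(img)
    for sigma, kps in zip((1.0, 2.0, 4.0), kp_lists):
        print(f"sigma={sigma}: {len(kps)} keypoints")

In the active-vision setting described above, a top-down stage would then probe only one candidate object suggested by such maps and test a hypothesis about it, instead of processing every pixel of every frame.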