Face normalization using multi-scale cortical keypoints

Bibliographic details
Main author: Cunha, João (author)
Other authors: Rodrigues, J. M. F. (author), du Buf, J. M. H. (author)
Format: article
Language: eng
Published: 2009
Subjects:
Full text: http://hdl.handle.net/10400.1/111
Country: Portugal
OAI: oai:sapientia.ualg.pt:10400.1/111
Description
Abstract: Empirical studies concerning face recognition suggest that faces may be stored in memory by a few canonical representations. Models of visual perception are based on image representations in cortical area V1 and beyond, which contain many cell layers for feature extractions. Simple, complex and end-stopped cells tuned to different spatial frequencies (scales) and/or orientations provide input for line, edge and keypoint detection. This yields a rich, multi-scale object representation that can be stored in memory in order to identify objects. The multi-scale, keypoint-based saliency maps for Focus-of-Attention can be explored to obtain face detection and normalization, after which face recognition can be achieved using the line/edge representation. In this paper, we focus only on face normalization, showing that multi-scale keypoints can be used to construct canonical representations of faces in memory.
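As a rough illustration of the idea summarized in the abstract (complex-cell-like responses at several scales and orientations feeding keypoint detection and a saliency map), the following Python sketch uses Gabor filters from scikit-image. It is a simplified stand-in for the biologically motivated V1 model described in the paper, not the authors' implementation; the function name, frequencies, and thresholds are illustrative assumptions.

```python
# Minimal sketch: multi-scale "complex cell" responses via Gabor filters,
# keypoint candidates as local maxima of the summed response magnitude.
import numpy as np
from skimage import data, color
from skimage.filters import gabor
from skimage.feature import peak_local_max


def multiscale_keypoint_saliency(image, frequencies=(0.1, 0.2, 0.4),
                                 n_orientations=8):
    """Sum Gabor response magnitudes over scales (frequencies) and orientations."""
    saliency = np.zeros_like(image, dtype=float)
    for freq in frequencies:                    # one "scale" per frequency
        for k in range(n_orientations):
            theta = k * np.pi / n_orientations  # evenly spaced orientations
            real, imag = gabor(image, frequency=freq, theta=theta)
            saliency += np.hypot(real, imag)    # complex-cell-like magnitude
    return saliency / saliency.max()


if __name__ == "__main__":
    # Grayscale test image standing in for a face image.
    img = color.rgb2gray(data.astronaut())
    sal = multiscale_keypoint_saliency(img)

    # Keypoint candidates: local maxima of the multi-scale saliency map,
    # which could then drive Focus-of-Attention and face normalization.
    keypoints = peak_local_max(sal, min_distance=10, threshold_rel=0.5)
    print(f"{len(keypoints)} keypoint candidates found")
```

In the paper's setting, such saliency peaks would be used to locate facial landmarks and bring the face into a canonical position and size; the sketch above stops at the keypoint-candidate stage.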