Automatic detection of small bowel tumors in capsule endoscopy based on color curvelet covariance statistical texture descriptors

Bibliographic details
Main author: Barbosa, Daniel (author)
Other authors: Correia, J. H. (author), Ramos, Jaime (author), Lima, C. S. (author)
Format: conferencePaper
Language: eng
Published: 2009
Subjects:
Full text: http://hdl.handle.net/1822/17514
Country: Portugal
OAI: oai:repositorium.sdum.uminho.pt:1822/17514
Description
Abstract: Traditional endoscopic methods do not allow visualization of the entire gastrointestinal (GI) tract. Wireless Capsule Endoscopy (CE) is a diagnostic procedure that overcomes this limitation. CE video frames carry rich information about the condition of the stomach and intestinal mucosa, encoded as color and texture patterns. It has long been known that human perception of texture relies on a multi-scale analysis of patterns, which can be modeled by multi-resolution approaches. Furthermore, modeling the covariance of textural descriptors has been used successfully to classify colonoscopy videos. The present paper therefore proposes a frame classification scheme based on statistical textural descriptors taken from the Discrete Curvelet Transform (DCT) domain, a recent multi-resolution mathematical tool. The DCT is built on an anisotropic notion of scale and offers high directional sensitivity in multiple orientations, making it well suited to characterizing complex patterns such as texture. The covariance of texture descriptors taken at a given detail level, across different angles, is used as the classification feature, in a scheme designated Color Curvelet Covariance. The classification step is performed by a multilayer perceptron neural network. The proposed method has been applied to real data from several capsule endoscopy exams and achieves 97.2% sensitivity and 97.4% specificity. These promising results support the feasibility of the proposed method.
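
As a rough illustration of the Color Curvelet Covariance idea summarized in the abstract, the sketch below computes covariance statistics over directional subband descriptors of each color channel and feeds them to a multilayer perceptron. It is not the authors' pipeline: directional_subbands is a crude FFT angular-wedge stand-in for a true discrete curvelet decomposition, the mean/std descriptors, feature layout, and MLP hyperparameters are illustrative assumptions, and all function names are hypothetical.

import numpy as np
from sklearn.neural_network import MLPClassifier

def directional_subbands(channel, n_dirs=8):
    """Crude stand-in for a curvelet-style directional decomposition:
    partition the 2-D FFT plane into angular wedges and inverse-transform
    each wedge. A real implementation would use a discrete curvelet
    transform with detail-level (band-pass) selection as well."""
    F = np.fft.fftshift(np.fft.fft2(channel))
    h, w = channel.shape
    yy, xx = np.mgrid[-h // 2:h - h // 2, -w // 2:w - w // 2]
    angles = np.mod(np.arctan2(yy, xx), np.pi)      # orientation in [0, pi)
    edges = np.linspace(0, np.pi, n_dirs + 1)
    subbands = []
    for k in range(n_dirs):
        mask = (angles >= edges[k]) & (angles < edges[k + 1])
        subbands.append(np.real(np.fft.ifft2(np.fft.ifftshift(F * mask))))
    return subbands

def ccc_features(frame_rgb, n_dirs=8):
    """Color Curvelet Covariance-style features (illustrative sketch):
    per color channel, compute a simple descriptor for each directional
    subband and keep the upper triangle of the covariance matrix of those
    descriptors, taken across directions, as the feature vector."""
    feats = []
    for c in range(3):
        bands = directional_subbands(frame_rgb[..., c].astype(float), n_dirs)
        # descriptor per direction: mean and standard deviation of |coefficients|
        desc = np.array([[np.abs(b).mean(), np.abs(b).std()] for b in bands])
        cov = np.cov(desc, rowvar=False)              # covariance across directions
        feats.append(cov[np.triu_indices_from(cov)])  # symmetric -> keep upper triangle
    return np.concatenate(feats)

def train_mlp(frames, labels):
    """Final classification step with a multilayer perceptron, as in the paper
    (hyperparameters here are arbitrary choices for the sketch)."""
    X = np.stack([ccc_features(f) for f in frames])
    clf = MLPClassifier(hidden_layer_sizes=(20,), max_iter=1000)
    clf.fit(X, labels)
    return clf

In this sketch, training amounts to train_mlp(list_of_rgb_frames, binary_tumor_labels); the trained classifier then labels new frames from their ccc_features vectors.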