Neural and Symbolic AI - mind the gap! Aligning Artificial Neural Networks and Ontologies


Bibliographic details
Main author: Ribeiro, Manuel António de Melo Chinopa de Sousa (author)
Format: masterThesis
Language: eng
Published: 2021
Subjects:
Full text: http://hdl.handle.net/10362/113651
Country: Portugal
OAI: oai:run.unl.pt:10362/113651
Description
Abstract: Artificial neural networks have been key to solving a variety of different problems. However, neural network models are still essentially regarded as black boxes, since they do not provide any human-interpretable evidence as to why they output a certain result. In this dissertation, we address this issue by leveraging ontologies and building small classifiers that map a neural network's internal representations to concepts from an ontology, enabling the generation of symbolic justifications for the output of neural networks. Using two image classification problems as testing ground, we discuss how to map the internal representations of a neural network to the concepts of an ontology, examine whether the results obtained by the established mappings match our understanding of the mapped concepts, and analyze the justifications obtained through this method.
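The core idea described in the abstract, training a small classifier that maps a network's hidden activations to an ontology concept, can be sketched as a linear probe. The sketch below is illustrative only and is not the dissertation's implementation: the activations are synthetic stand-ins for a hidden layer of a trained network, and the concept name `hasWheels` and the helper `train_concept_probe` are hypothetical.

```python
import numpy as np

# Synthetic "internal representations": in the dissertation's setting these
# would be activations of a hidden layer of a trained image classifier.
rng = np.random.default_rng(0)
n_samples, n_hidden = 200, 16
activations = rng.normal(size=(n_samples, n_hidden))

# Hypothetical ontology concept ("hasWheels"). Real labels would come from
# annotated examples; here they depend on one activation dimension so the
# probe has a learnable signal.
labels = (activations[:, 0] > 0).astype(float)

def train_concept_probe(X, y, lr=0.1, epochs=500):
    """Train a tiny logistic-regression probe that predicts whether an
    ontology concept holds, given a vector of hidden activations."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # sigmoid predictions
        grad_w = X.T @ (p - y) / len(y)          # gradient of log-loss
        grad_b = np.mean(p - y)
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

w, b = train_concept_probe(activations, labels)
preds = (1.0 / (1.0 + np.exp(-(activations @ w + b)))) > 0.5
accuracy = np.mean(preds == labels)
print(f"concept-probe accuracy: {accuracy:.2f}")
```

Once such probes exist for several concepts, the concepts predicted as true for a given input can be combined with the ontology's axioms to produce a symbolic justification for the network's output, which is the generation step the abstract refers to.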