On the Evaluation of Energy-Efficient Deep Learning Using Stacked Autoencoders on Mobile GPUs


Bibliographic details
Main author: Falcao, Gabriel (author)
Other authors: Alexandre, Luís (author), Marques, J. (author), Frazão, Xavier (author), Maria, J. (author)
Format: conferenceObject
Language: eng
Published: 2020
Subjects:
Full text: http://hdl.handle.net/10400.6/8152
Country: Portugal
OAI: oai:ubibliorum.ubi.pt:10400.6/8152
Description
Abstract: In recent years, deep learning architectures have gained attention by winning major international detection and classification challenges. However, given their high energy consumption, the need to run them on low-power devices at acceptable throughput is greater than ever. This paper addresses the problem by introducing energy-efficient deep learning based on local training and on low-power mobile GPU parallel architectures, all conveniently supported by the same high-level description of the deep network. It also proposes to find the maximum dimensions that a particular type of deep learning architecture, the stacked autoencoder, can support by identifying the hardware limitations of a representative group of mobile GPUs and platforms.
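The "local training" the abstract refers to is commonly realized as greedy layer-wise training of a stacked autoencoder: each layer is trained in isolation to reconstruct its own input, and its learned codes become the input of the next layer. The sketch below illustrates that idea only; the dimensions, learning rate, and numpy implementation are this example's assumptions, not the paper's actual GPU implementation.

```python
import numpy as np

# Illustrative sketch of greedy layer-wise ("local") stacked-autoencoder
# training. All sizes and hyperparameters here are hypothetical; the paper
# targets mobile GPU parallel architectures, not CPU numpy.

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_layer(X, hidden, epochs=200, lr=0.5):
    """Train one autoencoder layer on X; return encoder weights and codes."""
    n, d = X.shape
    W1 = rng.normal(0, 0.1, (d, hidden))   # encoder weights
    b1 = np.zeros(hidden)
    W2 = rng.normal(0, 0.1, (hidden, d))   # decoder weights
    b2 = np.zeros(d)
    for _ in range(epochs):
        H = sigmoid(X @ W1 + b1)           # encode
        Y = sigmoid(H @ W2 + b2)           # decode (reconstruction)
        dY = (Y - X) * Y * (1 - Y)         # gradient at the output
        dH = (dY @ W2.T) * H * (1 - H)     # backprop into the code layer
        W2 -= lr * H.T @ dY / n; b2 -= lr * dY.mean(0)
        W1 -= lr * X.T @ dH / n; b1 -= lr * dH.mean(0)
    return W1, b1, sigmoid(X @ W1 + b1)

# Stack two layers: the codes of layer 1 are the training data of layer 2,
# so each layer is trained locally, without end-to-end backpropagation.
X = rng.random((64, 16))                   # 64 toy samples, 16 features
W1, b1, H1 = train_layer(X, 8)             # 16 -> 8
W2, b2, H2 = train_layer(H1, 4)            # 8  -> 4
print(H2.shape)                            # prints (64, 4)
```

Because each layer only ever sees its own input and reconstruction, its weight matrices can be sized independently, which is what makes probing the maximum supportable layer dimensions on a given mobile GPU a well-defined experiment.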