A light implementation of a 3d convolutional neural network for online gesture classification

Bibliographic details
Year of defense: 2019 (defended 2019-10-31)
Main author: Baldissera, Fábio Brandolt
Advisor: Vargas, Fabian Luis (http://lattes.cnpq.br/9050311050537919)
Defense committee: Not informed by the institution
Document type: Master's thesis
Access type: Open access
Language: English
Defending institution: Pontifícia Universidade Católica do Rio Grande do Sul (PUCRS), Escola Politécnica
Graduate program: Programa de Pós-Graduação em Engenharia Elétrica
Department: Not informed by the institution
Country: Brazil
Funding: Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)
Keywords: Gesture Recognition; Online Classification; DCNN; 3DCNN; ENGENHARIAS
Access link: http://tede2.pucrs.br/tede2/handle/tede/10026
Abstract: With the advancement of machine learning techniques and increased access to computing power, Artificial Neural Networks (ANNs) have achieved state-of-the-art results in image classification and, more recently, in video classification. Gesture recognition from a video source enables more natural, contact-free human-machine interaction, improves immersion in virtual reality environments, and may even lead to sign language translation in the near future. However, the techniques used for video classification are usually computationally expensive, which makes them prohibitive on conventional hardware. This work studies and analyzes the applicability of continuous online gesture recognition techniques to embedded systems. It does so by proposing a new model based on 2D and 3D CNNs that performs online gesture recognition, i.e. it yields a label while the video frames are still being processed, in a predictive manner, before future frames of the video are available. This property is of paramount interest to applications in which the video is acquired concurrently with the classification process and the labels must be issued within a strict deadline. The proposed model was evaluated on three representative gesture datasets from the literature. The results suggest that the proposed technique improves on the state of the art by providing fast gesture recognition with high accuracy, which is fundamental for its applicability to embedded systems.
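As an illustration only (the dissertation itself is not reproduced in this record), the sketch below shows the general idea the abstract describes: a lightweight network that extracts per-frame features with 2D convolutions, aggregates a short buffer of recent frames with 3D convolutions, and emits a class label after every new frame, before the rest of the gesture is available. The layer widths, the 16-frame buffer, the 10 gesture classes and the PyTorch framing are assumptions made for this sketch, not values or code taken from the thesis.

    # Illustrative sketch only: a small 2D+3D CNN that labels a gesture from the
    # most recent frames of a stream. Layer widths, buffer length (16 frames) and
    # the number of classes (10) are assumptions, not taken from the dissertation.
    from collections import deque

    import torch
    import torch.nn as nn


    class Light2D3DGestureNet(nn.Module):
        def __init__(self, num_classes: int = 10):
            super().__init__()
            # Per-frame (2D) feature extractor, applied to every frame independently.
            self.frame_features = nn.Sequential(
                nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1),
                nn.ReLU(inplace=True),
                nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1),
                nn.ReLU(inplace=True),
            )
            # Temporal (3D) aggregation over the stack of per-frame feature maps.
            self.temporal = nn.Sequential(
                nn.Conv3d(32, 64, kernel_size=(3, 3, 3), padding=1),
                nn.ReLU(inplace=True),
                nn.AdaptiveAvgPool3d(1),
            )
            self.classifier = nn.Linear(64, num_classes)

        def forward(self, clip: torch.Tensor) -> torch.Tensor:
            # clip: (batch, time, channels, height, width)
            b, t, c, h, w = clip.shape
            feats = self.frame_features(clip.reshape(b * t, c, h, w))
            _, fc, fh, fw = feats.shape
            # Regroup into (batch, channels, time, height, width) for Conv3d.
            feats = feats.reshape(b, t, fc, fh, fw).permute(0, 2, 1, 3, 4)
            pooled = self.temporal(feats).flatten(1)
            return self.classifier(pooled)


    def online_labels(frame_stream, model, buffer_len: int = 16):
        """Yield a label after every new frame, using only past frames."""
        buffer = deque(maxlen=buffer_len)
        model.eval()
        with torch.no_grad():
            for frame in frame_stream:          # frame: (3, H, W) tensor
                buffer.append(frame)
                if len(buffer) < buffer_len:    # warm-up until the buffer is full
                    continue
                clip = torch.stack(list(buffer)).unsqueeze(0)  # (1, T, 3, H, W)
                yield model(clip).argmax(dim=1).item()

Under these assumptions, calling online_labels(frames, Light2D3DGestureNet()) over a live stream of frame tensors yields one predicted label per incoming frame once the buffer is full; the model described in the dissertation will differ in depth, input resolution and training details.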