Object detection and pose estimation from natural features for augmented reality in complex scenes
Year of defense: | 2016 |
---|---|
Main author: | |
Advisor: | |
Defense committee: | |
Document type: | Doctoral thesis |
Access type: | Open access |
Language: | eng |
Defending institution: |
Universidade Federal de Pernambuco
|
Graduate program: |
Programa de Pós-Graduação em Ciência da Computação
|
Department: |
Not informed by the institution
|
Country: |
Brasil
|
Keywords in Portuguese: | |
Access link: | https://repositorio.ufpe.br/handle/123456789/22417 |
Abstract: | Aligning virtual elements with real-world scenes (known as detection and tracking) using features naturally present in the scene is one of the most important challenges in Augmented Reality. In complex scenes such as industrial scenarios, the problem becomes harder due to the scarcity of features and models, high specularity, and other factors. Motivated by these problems, this PhD thesis addresses the question "How can object detection and pose estimation from natural features be improved for AR in complex scenes?". To answer it, we must also ask "What challenges do we face when developing a new tracker for real-world scenarios?". We began answering these questions by developing a complete tracking system that tackles characteristics typically found in industrial scenarios. The system was validated in a tracking competition organized by ISMAR, the most important AR conference in the world. During the contest, two problems complementary to tracking were also discussed: calibration, the procedure that places virtual information in the same coordinate system as the real world, and 3D reconstruction, which creates the 3D models of the scene used for tracking. Because many trackers need a pre-acquired model of the target objects, the quality of the generated geometric model influences the tracker, as observed during the contest. Sometimes these models are available, but in other cases acquiring them demands great manual effort or cost (e.g., laser scanning). We therefore analyzed how difficult it is today to automatically recover 3D geometry from a complex 3D scene using only video, taking an electrical substation as our example.
Building on the knowledge acquired in these experiments, we first tackled the problem of improving tracking in scenes where recent RGB-D sensors can be used during both model generation and tracking. We developed DARP (Depth Assisted Rectification of Patches), a technique that improves feature matching by rectifying patches according to their surface normals. We evaluated DARP on synthetic and real scenes and improved on traditional texture-based trackers such as ORB, DAFT, and SIFT. Since model generation is difficult in complex scenes, our second tracking approach does not depend on geometric models and aims to track both textured and textureless objects. We applied a supervised learning technique, Gradient Boosting Trees (GBTs), to cast tracking as a linear regression problem, building on image gradients and their relationship with the tracking parameters. We also improved on plain GBTs by combining them with traditional tracking cues, such as intensity- or edge-based features, which turns their piecewise constant prediction function into a more robust piecewise linear one. With this new approach it became possible to track textureless objects, such as a black-and-white map. |
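The patch rectification idea behind DARP can be illustrated with a small geometric sketch. Assuming a pinhole camera with intrinsics K and a patch normal n estimated from depth, a pure synthetic rotation of the camera so that its optical axis aligns with the normal induces the exact homography H = K R K⁻¹, which removes the patch's perspective foreshortening. This is a minimal, illustrative numpy sketch under those assumptions, not the thesis' actual implementation:

```python
import numpy as np

def rectifying_homography(K, n):
    """Homography induced by a pure camera rotation that aligns the
    optical axis with the patch normal n (unit vector in camera
    coordinates, pointing toward the camera). For a pure rotation
    the induced image warp H = K R K^-1 is exact."""
    n = n / np.linalg.norm(n)
    z = -n                                # virtual camera looks at the patch
    up = np.array([0.0, 1.0, 0.0])
    if abs(up @ z) > 0.9:                 # avoid a degenerate basis
        up = np.array([1.0, 0.0, 0.0])
    x = np.cross(up, z); x /= np.linalg.norm(x)
    y = np.cross(z, x)
    R = np.stack([x, y, z])               # rows = virtual camera axes
    return K @ R @ np.linalg.inv(K)

def apply_h(H, p):
    q = H @ np.array([p[0], p[1], 1.0])
    return q[:2] / q[2]

# An oblique planar patch seen by a camera at the origin.
K = np.array([[500.0, 0, 320], [0, 500.0, 240], [0, 0, 1]])
n = np.array([0.3, 0.2, -1.0]); n /= np.linalg.norm(n)    # faces the camera
P0 = np.array([0.2, 0.1, 2.0])                            # patch center
u = np.cross(n, [0.0, 0.0, 1.0]); u /= np.linalg.norm(u)  # in-plane axes
v = np.cross(n, u)

project = lambda P: (K @ P)[:2] / (K @ P)[2]
H = rectifying_homography(K, n)

# After rectification, equal 3D steps on the patch map to equal pixel steps,
# i.e. the patch appears fronto-parallel with uniform scale.
a = apply_h(H, project(P0))
b = apply_h(H, project(P0 + 0.05 * u))
c = apply_h(H, project(P0 + 0.05 * v))
d_ab, d_ac = np.linalg.norm(b - a), np.linalg.norm(c - a)
```

In a real pipeline the rectified patch would then be described and matched with a standard feature descriptor; here only the geometric warp is shown.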
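The learning-based tracker casts pose estimation as regression from image measurements to tracking parameters. A deliberately simplified 1-D sketch, using scikit-learn's GradientBoostingRegressor as a stand-in for the thesis' GBT formulation (which works on image gradients and full pose parameters), learns to predict a template's translation from intensity differences:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)

# A smooth 1-D "image" and a reference template cut out of it.
signal = np.cumsum(rng.standard_normal(300))
lo, hi = 120, 160
ref = signal[lo:hi]

def observe(shift):
    """Template observed under an integer translation (the 'pose')."""
    return signal[lo + shift:hi + shift]

# Training data: random shifts -> intensity-difference features.
shifts = rng.integers(-10, 11, size=500)
X = np.stack([observe(s) - ref for s in shifts])
y = shifts.astype(float)

# One boosted-tree regressor maps image differences to the pose update;
# its prediction function is piecewise constant, which is the limitation
# the thesis' piecewise linear extension addresses.
model = GradientBoostingRegressor(n_estimators=200, max_depth=3,
                                  random_state=0)
model.fit(X, y)

pred = float(model.predict((observe(7) - ref).reshape(1, -1))[0])
```

The variable names and synthetic signal are illustrative; the point is only that tree-based regression can recover a motion parameter directly from raw image differences, without a geometric model of the target.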
id |
UFPE_8662c10aa10b3549dfe4994f851d487d |
---|---|
oai_identifier_str |
oai:repositorio.ufpe.br:123456789/22417 |
network_acronym_str |
UFPE |
network_name_str |
Repositório Institucional da UFPE |
repository_id_str |
|
dc.title.pt_BR.fl_str_mv |
Object detection and pose estimation from natural features for augmented reality in complex scenes |
title |
Object detection and pose estimation from natural features for augmented reality in complex scenes |
spellingShingle |
Object detection and pose estimation from natural features for augmented reality in complex scenes SIMOES, Francisco Paulo Magalhaes Visão computacional Realidade aumentada Augmented Reality Computer Vision Industry Application Depth Assisted Rectification Learning Based Tracking |
title_short |
Object detection and pose estimation from natural features for augmented reality in complex scenes |
title_full |
Object detection and pose estimation from natural features for augmented reality in complex scenes |
title_fullStr |
Object detection and pose estimation from natural features for augmented reality in complex scenes |
title_full_unstemmed |
Object detection and pose estimation from natural features for augmented reality in complex scenes |
title_sort |
Object detection and pose estimation from natural features for augmented reality in complex scenes |
author |
SIMOES, Francisco Paulo Magalhaes |
author_facet |
SIMOES, Francisco Paulo Magalhaes |
author_role |
author |
dc.contributor.authorLattes.pt_BR.fl_str_mv |
http://lattes.cnpq.br/4321649532287831 |
dc.contributor.advisorLattes.pt_BR.fl_str_mv |
http://lattes.cnpq.br/3355338790654065 |
dc.contributor.author.fl_str_mv |
SIMOES, Francisco Paulo Magalhaes |
dc.contributor.advisor1.fl_str_mv |
TEICHRIEB, Veronica |
contributor_str_mv |
TEICHRIEB, Veronica |
dc.subject.por.fl_str_mv |
Visão computacional; Realidade aumentada; Augmented Reality; Computer Vision; Industry Application; Depth Assisted Rectification; Learning Based Tracking |
topic |
Visão computacional; Realidade aumentada; Augmented Reality; Computer Vision; Industry Application; Depth Assisted Rectification; Learning Based Tracking |
description |
Aligning virtual elements with real-world scenes (known as detection and tracking) using features naturally present in the scene is one of the most important challenges in Augmented Reality. In complex scenes such as industrial scenarios, the problem becomes harder due to the scarcity of features and models, high specularity, and other factors. Motivated by these problems, this PhD thesis addresses the question "How can object detection and pose estimation from natural features be improved for AR in complex scenes?". To answer it, we must also ask "What challenges do we face when developing a new tracker for real-world scenarios?". We began answering these questions by developing a complete tracking system that tackles characteristics typically found in industrial scenarios. The system was validated in a tracking competition organized by ISMAR, the most important AR conference in the world. During the contest, two problems complementary to tracking were also discussed: calibration, the procedure that places virtual information in the same coordinate system as the real world, and 3D reconstruction, which creates the 3D models of the scene used for tracking. Because many trackers need a pre-acquired model of the target objects, the quality of the generated geometric model influences the tracker, as observed during the contest. Sometimes these models are available, but in other cases acquiring them demands great manual effort or cost (e.g., laser scanning). We therefore analyzed how difficult it is today to automatically recover 3D geometry from a complex 3D scene using only video, taking an electrical substation as our example.
Building on the knowledge acquired in these experiments, we first tackled the problem of improving tracking in scenes where recent RGB-D sensors can be used during both model generation and tracking. We developed DARP (Depth Assisted Rectification of Patches), a technique that improves feature matching by rectifying patches according to their surface normals. We evaluated DARP on synthetic and real scenes and improved on traditional texture-based trackers such as ORB, DAFT, and SIFT. Since model generation is difficult in complex scenes, our second tracking approach does not depend on geometric models and aims to track both textured and textureless objects. We applied a supervised learning technique, Gradient Boosting Trees (GBTs), to cast tracking as a linear regression problem, building on image gradients and their relationship with the tracking parameters. We also improved on plain GBTs by combining them with traditional tracking cues, such as intensity- or edge-based features, which turns their piecewise constant prediction function into a more robust piecewise linear one. With this new approach it became possible to track textureless objects, such as a black-and-white map. |
publishDate |
2016 |
dc.date.issued.fl_str_mv |
2016-03-07 |
dc.date.accessioned.fl_str_mv |
2017-11-29T16:49:07Z |
dc.date.available.fl_str_mv |
2017-11-29T16:49:07Z |
dc.type.status.fl_str_mv |
info:eu-repo/semantics/publishedVersion |
dc.type.driver.fl_str_mv |
info:eu-repo/semantics/doctoralThesis |
format |
doctoralThesis |
status_str |
publishedVersion |
dc.identifier.uri.fl_str_mv |
https://repositorio.ufpe.br/handle/123456789/22417 |
url |
https://repositorio.ufpe.br/handle/123456789/22417 |
dc.language.iso.fl_str_mv |
eng |
language |
eng |
dc.rights.driver.fl_str_mv |
Attribution-NonCommercial-NoDerivs 3.0 Brazil http://creativecommons.org/licenses/by-nc-nd/3.0/br/ info:eu-repo/semantics/openAccess |
rights_invalid_str_mv |
Attribution-NonCommercial-NoDerivs 3.0 Brazil http://creativecommons.org/licenses/by-nc-nd/3.0/br/ |
eu_rights_str_mv |
openAccess |
dc.publisher.none.fl_str_mv |
Universidade Federal de Pernambuco |
dc.publisher.program.fl_str_mv |
Programa de Pos Graduacao em Ciencia da Computacao |
dc.publisher.initials.fl_str_mv |
UFPE |
dc.publisher.country.fl_str_mv |
Brasil |
publisher.none.fl_str_mv |
Universidade Federal de Pernambuco |
dc.source.none.fl_str_mv |
reponame:Repositório Institucional da UFPE instname:Universidade Federal de Pernambuco (UFPE) instacron:UFPE |
instname_str |
Universidade Federal de Pernambuco (UFPE) |
instacron_str |
UFPE |
institution |
UFPE |
reponame_str |
Repositório Institucional da UFPE |
collection |
Repositório Institucional da UFPE |
bitstream.url.fl_str_mv |
https://repositorio.ufpe.br/bitstream/123456789/22417/5/TeseFinal_fpms.pdf.jpg https://repositorio.ufpe.br/bitstream/123456789/22417/1/TeseFinal_fpms.pdf https://repositorio.ufpe.br/bitstream/123456789/22417/2/license_rdf https://repositorio.ufpe.br/bitstream/123456789/22417/3/license.txt https://repositorio.ufpe.br/bitstream/123456789/22417/4/TeseFinal_fpms.pdf.txt |
bitstream.checksum.fl_str_mv |
6ccad0b4776c099eef68c273638842e0 c84c50e3c8588d6c85e44f9ac6343200 e39d27027a6cc9cb039ad269a5db8e34 4b8a02c7f2818eaf00dcf2260dd5eb08 3d4f47fba4a9e5c4b0ef749e74938ba0 |
bitstream.checksumAlgorithm.fl_str_mv |
MD5 MD5 MD5 MD5 MD5 |
repository.name.fl_str_mv |
Repositório Institucional da UFPE - Universidade Federal de Pernambuco (UFPE) |
repository.mail.fl_str_mv |
attena@ufpe.br |
_version_ |
1813709103623569408 |