Self-supervised imitation learning from observation
| Year of defense: | 2021 |
|---|---|
| Main author: | Gavenski, Nathan Schneider |
| Advisor: | Barros, Rodrigo Coelho (http://lattes.cnpq.br/8172124241767828) |
| Defense committee: | |
| Document type: | Master's thesis (Dissertação) |
| Access type: | Open access |
| Language: | English (eng) |
| Defending institution: | Pontifícia Universidade Católica do Rio Grande do Sul, Escola Politécnica, Brasil, PUCRS, Programa de Pós-Graduação em Ciência da Computação |
| Graduate program: | Not informed by the institution |
| Department: | Not informed by the institution |
| Country: | Not informed by the institution |
| Keywords in Portuguese: | Aprendizado por Imitação; Clonagem de Comportamento; Aprendizado Auto-Supervisionado |
| Access link: | http://tede2.pucrs.br/tede2/handle/tede/9778 |
| Abstract: | Humans have the ability to learn through observation. The computational equivalent of learning by observation is behavioral cloning, an imitation learning technique that teaches an agent how to behave through expert demonstrations. Recent approaches make use of unlabeled data containing fully observable snapshots of the states, decoding the observed information into actions in a self-supervised fashion. However, several problems remain to be addressed, including the tendency of the iterative learning scheme to get stuck in bad local minima. In this work, we propose three methods, Augmented Behavioral Cloning from Observation, Imitating Unknown Policies via Exploration, and Combined Reinforcement and Imitation Learning, which address the problems of a decaying learning process, non-exploratory policies, and poor sample efficiency during the iterative process. The results from Augmented Behavioral Cloning from Observation show that a sampling mechanism can create more appropriate iterative learning cycles, while the Imitating Unknown Policies via Exploration results show that an exploration strategy can achieve results even better than the expert's, reaching the state of the art for the task. Lastly, the Combined Reinforcement and Imitation Learning framework shows that adding a reinforcement learning method within the imitation learning framework can create more efficient policies and reach results similar to the second method with fewer samples. The second and third methods offer distinct trade-offs between performance and efficiency, depending on the difficulty of acquiring expert samples. |
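The abstract describes an iterative, self-supervised scheme for behavioral cloning from observation: the agent's own interaction is used to learn how observed state transitions map to actions, and the expert's state-only demonstrations are then labeled and cloned. As a rough illustration of that idea only (not the dissertation's actual implementation), the sketch below follows the well-known BCO-style recipe. The `CartPole-v1` environment, the classic Gym step API, the scikit-learn models, and the `expert_states` placeholder are all assumptions made for this example.

```python
# Minimal sketch of behavioral cloning from observation (BCO-style), for illustration only.
# Assumes the classic Gym API (reset() -> obs, step() -> (obs, reward, done, info)),
# a discrete action space, and expert demonstrations given as consecutive states only.
import numpy as np
import gym
from sklearn.neural_network import MLPClassifier

env = gym.make("CartPole-v1")

def collect_random_transitions(n_steps):
    """Run a random policy to gather (s, a, s') triples for the inverse dynamics model."""
    states, actions, next_states = [], [], []
    s = env.reset()
    for _ in range(n_steps):
        a = env.action_space.sample()
        s2, _, done, _ = env.step(a)
        states.append(s); actions.append(a); next_states.append(s2)
        s = env.reset() if done else s2
    return np.array(states), np.array(actions), np.array(next_states)

# 1) Learn an inverse dynamics model P(a | s, s') from the agent's own interaction.
S, A, S2 = collect_random_transitions(5000)
idm = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=500)
idm.fit(np.hstack([S, S2]), A)

# 2) Label the expert's state-only demonstrations with inferred actions.
#    `expert_states` is a hypothetical array of consecutive expert observations.
expert_states = S[:1000]  # placeholder; replace with real expert observations
inferred_actions = idm.predict(np.hstack([expert_states[:-1], expert_states[1:]]))

# 3) Behavioral cloning: map expert states to the inferred actions.
policy = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=500)
policy.fit(expert_states[:-1], inferred_actions)

# The iterative, self-supervised variant alternates: run `policy` to collect new
# (s, a, s') triples, refit `idm`, relabel the expert states, and refit `policy`.
```

The dissertation's three methods modify this basic loop, for example by changing how the post-demonstration samples are selected, by adding an exploration strategy, and by combining the imitation objective with reinforcement learning for better sample efficiency.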