MOREXAI: Um modelo para refletir sobre Inteligência Artificial Explicável

Bibliographic details
Defense year: 2022
Main author: Carvalho, Niltemberg de Oliveira
Advisor: Sampaio, Andréia Libório
Defense committee: Not informed by the institution
Document type: Master's thesis
Access type: Open access
Language: por
Defending institution: Not informed by the institution
Graduate program: Not informed by the institution
Department: Not informed by the institution
Country: Not informed by the institution
Keywords in Portuguese: Engenharia Semiótica; Inteligência Artificial; Ética
Access link: http://www.repositorio.ufc.br/handle/riufc/70423
Abstract: Interest in systems that use machine learning has grown in recent years. Some algorithms implemented in these intelligent systems hide their fundamental assumptions, input information, and parameters in black-box models that are not directly observable. The adoption of these systems in sensitive and large-scale application domains raises several ethical issues. One way to promote these ethical requirements is to improve the explainability of these models. However, explainability may have different goals and content depending on the intended audience (developers, domain experts, and end users). Explanations do not always reflect the requirements of end users, because developers and users do not share the same social meaning system, which makes it difficult to build more effective explanations. This work proposes a conceptual model, based on Semiotic Engineering, that frames explanation as a communicative process in which designers and users work together on explanation requirements. The Model to Reason about the eXplanation design in Artificial Intelligence Systems (MoReXAI) is based on a structured conversation that promotes reflection on subjects such as privacy, fairness, accountability, equity, and explainability, aiming to help end users understand how such systems work and to support the design of their explanations. The model can work as an epistemic tool: the reflections raised in conversations about these ethical principles helped elicit important requirements for the design of the explanation.
id UFC-7_cd56df9da59a70b2aedd731339949637
oai_identifier_str oai:repositorio.ufc.br:riufc/70423
dc.title.pt_BR.fl_str_mv MOREXAI: Um modelo para refletir sobre Inteligência Artificial Explicável
author Carvalho, Niltemberg de Oliveira
author_role author
dc.contributor.author.fl_str_mv Carvalho, Niltemberg de Oliveira
dc.contributor.advisor1.fl_str_mv Sampaio, Andréia Libório
dc.subject.por.fl_str_mv Engenharia Semiótica
Inteligência Artificial
Ética
publishDate 2022
dc.date.issued.fl_str_mv 2022
dc.date.accessioned.fl_str_mv 2023-02-02T14:25:12Z
dc.date.available.fl_str_mv 2023-02-02T14:25:12Z
dc.type.status.fl_str_mv info:eu-repo/semantics/publishedVersion
dc.type.driver.fl_str_mv info:eu-repo/semantics/masterThesis
dc.identifier.citation.fl_str_mv CARVALHO, Niltemberg de Oliveira. MOREXAI: Um modelo para refletir sobre Inteligência Artificial Explicável. 2022. 94 f. Dissertação (mestrado) – Universidade Federal do Ceará, Campus de Quixadá, Programa de Pós-Graduação em Computação, Quixadá, 2022.
dc.identifier.uri.fl_str_mv http://www.repositorio.ufc.br/handle/riufc/70423
dc.language.iso.fl_str_mv por
dc.rights.driver.fl_str_mv info:eu-repo/semantics/openAccess
dc.source.none.fl_str_mv reponame:Repositório Institucional da Universidade Federal do Ceará (UFC)
instname:Universidade Federal do Ceará (UFC)
instacron:UFC
bitstream.url.fl_str_mv http://repositorio.ufc.br/bitstream/riufc/70423/2/license.txt
http://repositorio.ufc.br/bitstream/riufc/70423/1/2022_dissertacao_ndeocarvalho.pdf
bitstream.checksum.fl_str_mv 8a4605be74aa9ea9d79846c1fba20a33
95987d5062bcb89bc41c269852f4d4a6
bitstream.checksumAlgorithm.fl_str_mv MD5
MD5
repository.name.fl_str_mv Repositório Institucional da Universidade Federal do Ceará (UFC) - Universidade Federal do Ceará (UFC)
repository.mail.fl_str_mv bu@ufc.br || repositorio@ufc.br