Semantic segmentation with multi-source domain adaptation for radiological images
| Defense year: | 2020 |
|---|---|
| Main author: | Hugo Neves de Oliveira |
| Advisor: | |
| Defense committee: | |
| Document type: | Thesis |
| Access type: | Open access |
| Language: | eng |
| Defense institution: | Universidade Federal de Minas Gerais |
| Graduate program: | Not informed by the institution |
| Department: | Not informed by the institution |
| Country: | Not informed by the institution |
| Keywords in Portuguese: | Computação – Teses; Aprendizado profundo; Adaptação de domínio; Imagens médicas; Segmentação de imagens |
| Access link: | https://hdl.handle.net/1843/51331 |
Abstract: Distinct digitization techniques for biomedical images yield different visual patterns across samples from many radiological exams. These differences may hamper the use of data-driven Machine Learning approaches, such as Deep Learning, for inference over these images. Another difficulty in this field is the scarcity of labeled data, even though unlabeled data is often abundant. Therefore, an important step toward improving the generalization capabilities of these methods is to perform Unsupervised and Semi-Supervised Domain Adaptation between different datasets of biomedical images. To tackle this problem, we propose an Unsupervised and Semi-Supervised Domain Adaptation method for dense labeling tasks in biomedical images, using Generative Adversarial Networks for Unsupervised Image-to-Image Translation. We merge these generative models with well-known supervised deep semantic segmentation architectures to create two semi-supervised methods capable of learning from both unlabeled and labeled data, whenever labels are available. The first, a Domain-to-Domain method, is limited, like most other Image Translation methods in the literature, to a pair of domains: one source and one target. The second takes advantage of conditional dataset training to encourage Domain Generalization across several data sources from the same domain. Building on this conditional dataset encoding, we also devise a novel pipeline for rib segmentation in X-Ray images that requires no labels at all. We compare our method across a wide range of domains, datasets, and segmentation tasks against traditional baselines from the Domain Adaptation literature, such as pretrained models used both with and without fine-tuning. We perform both quantitative and qualitative analyses of the proposed methods and baselines in the many distinct scenarios considered in our experimental evaluation.
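The conditional dataset encoding mentioned above can be illustrated with a minimal sketch: a one-hot dataset code is broadcast to the image resolution and concatenated to the input as extra channels, so that a single network can be told which source dataset a sample comes from. The function and variable names below are illustrative assumptions, not the thesis's actual implementation.

```python
import numpy as np

def encode_with_dataset_code(image, dataset_idx, num_datasets):
    """Concatenate a one-hot dataset code to an image as extra channels.

    image: float array of shape (C, H, W)
    dataset_idx: integer index of the sample's source dataset
    num_datasets: total number of source datasets
    Returns an array of shape (C + num_datasets, H, W).
    """
    c, h, w = image.shape
    # One-hot code for the source dataset, broadcast to the spatial size.
    code = np.zeros((num_datasets, h, w), dtype=image.dtype)
    code[dataset_idx] = 1.0
    return np.concatenate([image, code], axis=0)

# A toy grayscale 1x4x4 "radiograph" from the second of three datasets.
x = np.random.rand(1, 4, 4).astype(np.float32)
conditioned = encode_with_dataset_code(x, dataset_idx=1, num_datasets=3)
print(conditioned.shape)  # (4, 4, 4): 1 image channel + 3 code channels
```

A conditioned network trained this way sees samples from all source datasets in one pool, which is what enables the multi-source training the abstract contrasts with pairwise adaptation.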
We empirically observe the limitations of pairwise Domain Adaptation approaches for truly generalizable radiograph segmentation, evidencing the better performance of multi-source training methods on this task. The proposed Conditional Domain Adaptation method shows consistently and significantly better results than the baselines in scarce-label scenarios (that is, when labeled data is limited or non-existent in the target dataset), achieving Jaccard indices greater than 0.9 in most tasks. Completely Unsupervised Domain Adaptation results were observed to be close to those of the Fully Supervised Domain Adaptation used in the traditional procedure of fine-tuning pretrained Deep Neural Networks.
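The Jaccard index (intersection over union) used as the evaluation metric above has a standard definition for binary segmentation masks; the snippet below is that textbook definition, not code from the thesis.

```python
import numpy as np

def jaccard_index(pred, target):
    """Intersection over union of two binary masks (1 = foreground)."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    # By convention, two empty masks agree perfectly.
    return 1.0 if union == 0 else intersection / union

pred = np.array([[1, 1, 0], [0, 1, 0]])
target = np.array([[1, 1, 0], [0, 0, 1]])
print(jaccard_index(pred, target))  # 2 / 4 = 0.5
```

A Jaccard index above 0.9 therefore means the predicted and ground-truth masks overlap on at least 90% of their combined foreground area.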
| id | UFMG_9221f932fdf43b463334bee2379564e5 |
|---|---|
| oai_identifier_str | oai:repositorio.ufmg.br:1843/51331 |
| network_acronym_str | UFMG |
| network_name_str | Repositório Institucional da UFMG |
| repository_id_str | |
| spelling | Title: Semantic segmentation with multi-source domain adaptation for radiological images. Subjects: Computação – Teses; Aprendizado profundo; Adaptação de domínio; Imagens médicas; Segmentação de imagens. Author: Hugo Neves de Oliveira. Funding: CAPES - Coordenação de Aperfeiçoamento de Pessoal de Nível Superior. Publisher: Universidade Federal de Minas Gerais. Defense date: 2020-07-21. Type: info:eu-repo/semantics/doctoralThesis, info:eu-repo/semantics/publishedVersion. Format: application/pdf. Language: eng. License: http://creativecommons.org/licenses/by/3.0/pt/, info:eu-repo/semantics/openAccess. URI: https://hdl.handle.net/1843/51331 |
| dc.title.none.fl_str_mv | Semantic segmentation with multi-source domain adaptation for radiological images |
|---|---|
| title | Semantic segmentation with multi-source domain adaptation for radiological images |
| spellingShingle | Semantic segmentation with multi-source domain adaptation for radiological images; Hugo Neves de Oliveira; Computação – Teses; Aprendizado profundo; Adaptação de domínio; Imagens médicas; Segmentação de imagens |
| title_short | Semantic segmentation with multi-source domain adaptation for radiological images |
| title_full | Semantic segmentation with multi-source domain adaptation for radiological images |
| title_fullStr | Semantic segmentation with multi-source domain adaptation for radiological images |
| title_full_unstemmed | Semantic segmentation with multi-source domain adaptation for radiological images |
| title_sort | Semantic segmentation with multi-source domain adaptation for radiological images |
| author | Hugo Neves de Oliveira |
| author_facet | Hugo Neves de Oliveira |
| author_role | author |
| dc.contributor.author.fl_str_mv | Hugo Neves de Oliveira |
| dc.subject.por.fl_str_mv | Computação – Teses; Aprendizado profundo; Adaptação de domínio; Imagens médicas; Segmentação de imagens |
| topic | Computação – Teses; Aprendizado profundo; Adaptação de domínio; Imagens médicas; Segmentação de imagens |
| publishDate | 2020 |
|---|---|
| dc.date.none.fl_str_mv | 2020-07-21; 2023-03-29T15:28:13Z; 2023-03-29T15:28:13Z; 2025-09-09T00:54:11Z |
| dc.type.status.fl_str_mv | info:eu-repo/semantics/publishedVersion |
| dc.type.driver.fl_str_mv | info:eu-repo/semantics/doctoralThesis |
| format | doctoralThesis |
| status_str | publishedVersion |
| dc.identifier.uri.fl_str_mv | https://hdl.handle.net/1843/51331 |
| url | https://hdl.handle.net/1843/51331 |
| dc.language.iso.fl_str_mv | eng |
| language | eng |
| dc.rights.driver.fl_str_mv | http://creativecommons.org/licenses/by/3.0/pt/; info:eu-repo/semantics/openAccess |
| rights_invalid_str_mv | http://creativecommons.org/licenses/by/3.0/pt/ |
| eu_rights_str_mv | openAccess |
| dc.format.none.fl_str_mv | application/pdf |
| dc.publisher.none.fl_str_mv | Universidade Federal de Minas Gerais |
| publisher.none.fl_str_mv | Universidade Federal de Minas Gerais |
| dc.source.none.fl_str_mv | reponame:Repositório Institucional da UFMG; instname:Universidade Federal de Minas Gerais (UFMG); instacron:UFMG |
| instname_str | Universidade Federal de Minas Gerais (UFMG) |
| instacron_str | UFMG |
| institution | UFMG |
| reponame_str | Repositório Institucional da UFMG |
| collection | Repositório Institucional da UFMG |
| repository.name.fl_str_mv | Repositório Institucional da UFMG - Universidade Federal de Minas Gerais (UFMG) |
| repository.mail.fl_str_mv | repositorio@ufmg.br |
| _version_ | 1856414104663621632 |