This is an overview page with metadata for this scientific work. The full article is available from the publisher.
Enhancing Continual Abdominal Multi-Organ and Tumor Segmentation through Best Model Knowledge Distillation and Encoder-Decoder Training
Citations: 0
Authors: 4
Year: 2024
Abstract
Continual learning is crucial for medical image segmentation, as it extends a model’s recognition capabilities to accommodate growing data and class sets without compromising privacy and security. Motivated by the need for more efficient and effective continual learning models in medical image segmentation, we propose a novel model that builds upon the work of Zhang et al. (2023) while introducing notable improvements. Our model uses two techniques to enhance continual learning performance. First, we aggregate additional information from the decoder during the model training phase to improve the model’s representational capacity. Second, we propose an effective knowledge distillation method that leverages knowledge from previously trained models to generate pseudo-labels for new data, thereby eliminating the need to access old data. This approach effectively mitigates catastrophic forgetting while complying with privacy and security constraints. We evaluate our proposed method on the benchmark BTCV dataset and compare its performance against several baseline methods. The experimental results demonstrate that our approach outperforms some of the baseline methods in certain instances, showcasing the effectiveness of our dynamic architecture expansion, decoder information aggregation, and knowledge distillation techniques. Our findings pave the way for further research into efficient and effective continual learning methods for medical image segmentation.
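The pseudo-label idea described in the abstract can be illustrated with a minimal sketch: a frozen, previously trained model supplies labels for the old classes on new images, and ground-truth annotations for the newly introduced classes are merged on top. The function name, array shapes, and label-offset convention below are assumptions for illustration; the paper's exact distillation scheme may differ.

```python
import numpy as np

def merge_pseudo_labels(old_model_probs, new_labels, num_old_classes):
    """Combine old-model pseudo-labels with new-class annotations.

    old_model_probs : (H, W, C_old) softmax output of the frozen old model
    new_labels      : (H, W) ints; 0 = unlabeled, >0 = newly annotated class
    num_old_classes : number of classes the old model predicts (incl. background)

    Hypothetical helper: new class ids are offset past the old ids so the
    merged map has one consistent label space.
    """
    # Argmax of the old model's predictions serves as pseudo-ground-truth
    # for previously learned organs/tumors, so old data need not be stored.
    pseudo = np.argmax(old_model_probs, axis=-1)
    # Where the new annotation provides a label, remap it past the old
    # class range and let it override the pseudo-label.
    merged = np.where(new_labels > 0, new_labels + num_old_classes - 1, pseudo)
    return merged
```

A segmentation loss computed against `merged` then trains the new model on old and new classes jointly, which is the mechanism the abstract credits with mitigating catastrophic forgetting.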