Enhancing Cross-Modal Medical Image Segmentation through Compositionality and Disentanglement

This repository contains the checkpoints of several disentangled representation learning models for cross-modal medical image segmentation, used in the paper 'Enhancing Cross-Modal Medical Image Segmentation through Compositionality'. In particular, it contains the checkpoints of our proposed method, which introduces compositionality into a cross-modal segmentation framework to enhance performance and interpretability while reducing computational costs.

The checkpoints are trained for myocardium (MYO), left ventricle (LV), and right ventricle (RV) segmentation on the MMWHS dataset in both directions, i.e., with either CT or MRI as the target domain. They are also trained for liver parenchyma segmentation on the CHAOS dataset, with either MRI T1 or MRI T2 as the target domain.

Please refer to the original GitHub repository for the code.
