---
license: apache-2.0
tags:
  - medical-imaging
  - image-registration
  - torchscript
  - impact
  - pretrained
  - segmentation
---

# 🧠 TorchScript Models for the IMPACT Semantic Similarity Metric

This repository provides a collection of **TorchScript-exported pretrained models** designed for use with the **IMPACT** similarity metric, enabling semantic medical image registration through feature-level comparison.

The IMPACT metric is introduced in the following preprint, currently under review:

> **IMPACT: A Generic Semantic Loss for Multimodal Medical Image Registration**  
> *V. Boussot, C. Hémon, J.-C. Nunes, J. Dowling, S. Rouzé, C. Lafond, A. Barateau, J.-L. Dillenseger*  
> [arXiv:2503.24121 [cs.CV]](https://arxiv.org/abs/2503.24121)

🔧 The full implementation of IMPACT, along with its integration into the **Elastix** framework, is available in the repository:  
➡️ [github.com/vboussot/ImpactLoss](https://github.com/vboussot/ImpactLoss)

This repository also includes example parameter maps, TorchScript model handling utilities, and a ready-to-use Docker environment for quick experimentation and reproducibility.
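
As a quick illustration, any of the exported models can be loaded and queried with plain PyTorch. The sketch below is illustrative only: the file name and input shape are placeholders, and the number and layout of the returned feature maps depend on the specific export.

```python
import torch

# Minimal sketch: load a TorchScript feature extractor from this repository.
# "model.pt" is a placeholder; substitute any model file from the repo.
model = torch.jit.load("model.pt", map_location="cpu").eval()

# Dummy 3D volume (batch, channel, depth, height, width); the shape is illustrative.
volume = torch.randn(1, 1, 64, 64, 64)

with torch.no_grad():
    features = model(volume)  # one or more feature maps, depending on the export
```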

---

## 📚 Pretrained Models

The TorchScript models provided in this repository were exported from publicly available pretrained networks. These include:

- **TotalSegmentator (TS)** — U-Net models trained for full-body anatomical segmentation  
- **Segment Anything 2.1 (SAM2.1)** — Foundation model for segmentation on natural images  
- **DINOv2** — Self-supervised vision transformer trained on diverse datasets  
- **Anatomix** — General-purpose 3D U-Net encoder with anatomical priors for medical images  

Each model exposes multiple feature extraction layers that can be selected independently through the `LayerMask` parameter in the IMPACT configuration, as sketched below.
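
For instance, assuming the exported model returns one feature map per layer, masking layers on the Python side would look like this (the mask value is hypothetical; the exact `LayerMask` syntax expected by Elastix is documented in the ImpactLoss repository):

```python
import torch

model = torch.jit.load("model.pt", map_location="cpu").eval()
layer_mask = [1, 0, 1]  # hypothetical mask: keep the first and third feature layers

with torch.no_grad():
    feature_maps = model(torch.randn(1, 1, 64, 64, 64))

# Keep only the layers enabled in the mask before computing the similarity.
selected = [f for f, keep in zip(feature_maps, layer_mask) if keep]
```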

In addition, the repository also includes:

- **MIND** — A handcrafted modality-independent neighbourhood descriptor, wrapped in TorchScript


| Model          | Specialization                        | Paper / Reference                                           | Field of View         | License      | Preprocessing |
|----------------|---------------------------------------|-------------------------------------------------------------|------------------------|--------------|---------------|
| **MIND**       | Handcrafted descriptor                | [Heinrich et al., 2012](https://doi.org/10.1016/j.media.2012.05.008) | `2*r*d + 1` (r: radius, d: dilation)               | Apache 2.0 | None |
| **SAM2.1**     | General segmentation (natural images) | [Ravi et al., 2024](https://arxiv.org/abs/2408.00714)       | 29                    | Apache 2.0           | Normalize intensities to [0, 1], then standardize with mean 0.485 and std 0.229 |
| **TS Models**  | CT/MRI segmentation                   | [Wasserthal et al., 2022](https://arxiv.org/abs/2208.05868)  | `2^l + 3` (l: layer number)              | Apache 2.0    | Canonical orientation for all models. For MRI models (e.g., TS/M730–M733), standardize intensities to zero mean and unit variance. For CT models (e.g., TS/M258, TS/M291), clip intensities to [-1024, 276] HU, then normalize by centering at -370 HU and scaling by 436.6.|
| **Anatomix**   | Anatomy-aware 3D U-Net encoder        | [Dey et al., 2024](https://arxiv.org/abs/2411.02372)        | Global (static mode)          | MIT           | Normalize intensities to [0, 1] |
| **DINOv2**     | Self-supervised vision transformer    | [Oquab et al., 2023](https://arxiv.org/abs/2304.07193)       | 14     | Apache 2.0           | Normalize intensities to [0, 1], then standardize with mean 0.485 and std 0.229 |
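
The preprocessing recipes above translate directly into a few tensor operations. The following sketch implements them exactly as given in the table; the function names are illustrative and not part of the repository's API.

```python
import torch

def preprocess_ts_ct(hu: torch.Tensor) -> torch.Tensor:
    """CT recipe for TS models (e.g., TS/M258, TS/M291): clip to [-1024, 276] HU,
    center at -370 HU, scale by 436.6."""
    return (hu.clamp(-1024.0, 276.0) + 370.0) / 436.6

def preprocess_ts_mri(img: torch.Tensor) -> torch.Tensor:
    """MRI recipe for TS models (e.g., TS/M730-M733): zero mean, unit variance."""
    return (img - img.mean()) / img.std()

def preprocess_sam_dino(img: torch.Tensor) -> torch.Tensor:
    """SAM2.1 / DINOv2 recipe: rescale to [0, 1], then standardize
    with mean 0.485 and std 0.229."""
    img = (img - img.min()) / (img.max() - img.min())
    return (img - 0.485) / 0.229
```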


---

### 🔍 TS Model Variants

**TS Models** refer to the following TotalSegmentator-derived TorchScript models:  
`M258, M291, M293, M294, M295, M297, M298, M730, M731, M732, M733, M850, M851`

Each model is specialized for a specific set of anatomical structures or resolution (e.g., 3 mm / 6 mm), and all variants share the same encoder-decoder architecture.
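
The field-of-view formulas from the table above can be checked with a small worked example:

```python
def ts_fov(l: int) -> int:
    return 2 ** l + 3  # field of view of TS feature layer l

def mind_fov(r: int, d: int) -> int:
    return 2 * r * d + 1  # field of view of MIND with radius r and dilation d

print([ts_fov(l) for l in range(1, 6)])  # [5, 7, 11, 19, 35]
print(mind_fov(r=2, d=2))                # 9
```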

---