Update README.md
In addition, the repository also includes:

- **MIND** — A handcrafted descriptor, wrapped in TorchScript

| Model | Specialization | Paper / Reference | Field of View | License | Preprocessing |
|----------------|---------------------------------------|-------------------------------------------------------------|------------------------|--------------|---------------|
| **MIND** | Handcrafted descriptor | [Heinrich et al., 2012](https://doi.org/10.1016/j.media.2012.05.008) | `2*r*d + 1` (r: radius, d: dilation) | Apache 2.0 | None |
| **SAM2.1** | General segmentation (natural images) | [Ravi et al., 2024](https://arxiv.org/abs/2408.00714) | 29 | Apache 2.0 | Normalize intensities to [0, 1], then standardize with mean 0.485 and std 0.229 |
| **TS Models** | CT/MRI segmentation | [Wasserthal et al., 2022](https://arxiv.org/abs/2208.05868) | `2^l + 3` (l: layer number) | Apache 2.0 | Canonical orientation for all models. For MRI models (e.g., TS/M730–M733), standardize intensities to zero mean and unit variance. For CT models (e.g., TS/M258, TS/M291), clip intensities to [-1024, 276] HU, then normalize by centering at -370 HU and scaling by 436.6. |
| **Anatomix** | Anatomy-aware transformer encoder | [Dey et al., 2024](https://arxiv.org/abs/2411.02372) | Global (static mode) | MIT | Normalize intensities to [0, 1] |
| **DINOv2** | Self-supervised vision transformer | [Oquab et al., 2023](https://arxiv.org/abs/2304.07193) | 14 | Apache 2.0 | Normalize intensities to [0, 1], then standardize with mean 0.485 and std 0.229 |
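The per-model preprocessing rules above can be sketched as plain NumPy transforms. This is a minimal illustration, not the repository's actual API: the function names are hypothetical, and "normalize to [0, 1]" is assumed to mean min–max scaling of each volume.

```python
import numpy as np

def preprocess_sam_dino(img: np.ndarray) -> np.ndarray:
    """SAM2.1 / DINOv2 rule: scale to [0, 1], then standardize
    with mean 0.485 and std 0.229 (min-max scaling assumed)."""
    img = (img - img.min()) / (img.max() - img.min())
    return (img - 0.485) / 0.229

def preprocess_ts_mri(img: np.ndarray) -> np.ndarray:
    """TS MRI-model rule: standardize to zero mean, unit variance."""
    return (img - img.mean()) / img.std()

def preprocess_ts_ct(hu: np.ndarray) -> np.ndarray:
    """TS CT-model rule: clip to [-1024, 276] HU, then center at
    -370 HU and scale by 436.6."""
    hu = np.clip(hu, -1024.0, 276.0)
    return (hu - (-370.0)) / 436.6

def preprocess_anatomix(img: np.ndarray) -> np.ndarray:
    """Anatomix rule: min-max scale intensities to [0, 1]."""
    return (img - img.min()) / (img.max() - img.min())
```

Canonical (e.g., RAS) reorientation for the TS models is an image-geometry step and would be applied with a medical-imaging library before these intensity transforms.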

---