andreped committed
Commit 0ba045b · Parents: a693ba1, 19b6500

Merge branch 'master' of https://github.com/andreped/livermask into master

Files changed (1):
  1. README.md +15 -26
README.md CHANGED
@@ -1,41 +1,25 @@
  # Automatic liver segmentation in CT using deep learning
  [![license](https://img.shields.io/github/license/DAVFoundation/captain-n3m0.svg?style=flat-square)](https://github.com/DAVFoundation/captain-n3m0/blob/master/LICENSE)

- #### A U-Net trained on the LITS dataset is automatically downloaded when running the inference script and can be used as you wish, ENJOY! :)

  <img src="figures/Segmentation_CustusX.PNG" width="70%" height="70%">

- The figure shows a predicted liver mask with the corresponding patient CT in 3D Slicer. It is Volume-10 from the LITS17 dataset.

- ### Credit
- The LITS dataset can be accessed from [here](https://competitions.codalab.org), and the corresponding challenge paper from [here](https://arxiv.org/abs/1901.04056). If the trained model is used, please consider citing this paper.
-
- ### Usage:
-
- 1) Clone the repo:
- ```
- git clone https://github.com/andreped/livermask.git
- cd livermask
- ```
- 2) Create a virtual environment and install dependencies:
- ```
- virtualenv -ppython3 venv
- source venv/bin/activate
- pip install -r /path/to/requirements.txt
  ```
- 3) Run the livermask method:
- ```
- cd livermask
- python livermask.py "path_to_ct_nifti.nii" "output_name.nii"
  ```

- If you lack any modules afterwards, try installing them through setup.py (this can be done instead of using requirements.txt):
  ```
- pip install wheel
- python setup.py bdist_wheel
  ```

- ### DICOM/NIfTI format
  The pipeline assumes the input is in the NIfTI format and outputs a binary volume in the same format (.nii).
  DICOM can be converted to NIfTI using the CLI [dcm2niix](https://github.com/rordenlab/dcm2niix), as follows:
  ```
@@ -44,9 +28,14 @@ dcm2niix -s y -m y -d 1 "path_to_CT_folder" "output_name"

  Note that "-d 1" assumes that "path_to_CT_folder" is the folder directly above the set of DICOM scans you want to import and convert. It can be removed if you want to convert multiple series at the same time. It is possible to set "." for "output_name", which in theory should output a file with the same name as the DICOM folder, but that does not seem to happen...

- ### Troubleshooting
  You might have issues downloading the model when using a VPN. If any issues are observed, try disabling the VPN and try again.

  ------

  Made with :heart: and python
 
  # Automatic liver segmentation in CT using deep learning
  [![license](https://img.shields.io/github/license/DAVFoundation/captain-n3m0.svg?style=flat-square)](https://github.com/DAVFoundation/captain-n3m0/blob/master/LICENSE)

+ #### The pretrained U-Net model is automatically downloaded when running the inference script and can be used as you wish, ENJOY! :)

  <img src="figures/Segmentation_CustusX.PNG" width="70%" height="70%">

+ ## Install

  ```
+ pip install git+https://github.com/andreped/livermask.git
  ```

+ ## Usage:
+
  ```
+ livermask --input path-to-nifti.nii --output path-to-output-file.nii
  ```

+ In addition, there is an optional `--cpu` flag to disable the GPU (force computations on the CPU only) if necessary.
+
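For instance, combining the `--cpu` flag with the usage example above (both file names are placeholders, not required values) would look roughly like this:
```
# force CPU-only inference; replace the placeholder paths with your own files
livermask --input path-to-nifti.nii --output path-to-output-file.nii --cpu
```
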
+ ## DICOM/NIfTI format
  The pipeline assumes the input is in the NIfTI format and outputs a binary volume in the same format (.nii).
  DICOM can be converted to NIfTI using the CLI [dcm2niix](https://github.com/rordenlab/dcm2niix), as follows:
  ```

  Note that "-d 1" assumes that "path_to_CT_folder" is the folder directly above the set of DICOM scans you want to import and convert. It can be removed if you want to convert multiple series at the same time. It is possible to set "." for "output_name", which in theory should output a file with the same name as the DICOM folder, but that does not seem to happen...
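
As a rough illustration of the note above, converting several DICOM series under one parent folder in a single call by dropping "-d 1" (the parent folder name here is a placeholder) could look like:
```
# same command as above with "-d 1" removed, per the note; placeholder paths
dcm2niix -s y -m y "path_to_parent_CT_folder" "output_name"
```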

+ ## Troubleshooting
  You might have issues downloading the model when using a VPN. If any issues are observed, try disabling the VPN and try again.

+ ## Acknowledgements
+ The model was trained on the LITS dataset. The dataset is openly accessible and can be downloaded from [here](https://competitions.codalab.org). If this tool is used, please consider citing the corresponding [LITS challenge dataset paper](https://arxiv.org/abs/1901.04056).
+
+ Disclaimer: I have no affiliation with the LITS challenge, the LITS dataset, or the challenge paper. I only wish to provide an open, free-to-use tool that people may find useful :)
+
  ------

  Made with :heart: and python