Congress: ECR24
Poster Number: C-21327
Type: EPOS Radiologist (educational)
DOI: 10.26044/ecr2024/C-21327
Authorblock: F. Buemi, C. Giardina, A. Perri, S. Caloggero, A. Celona, N. Sicilia, O. Ventura Spagnolo, F. Galletta, G. Mastroeni; Messina/IT
Disclosures:
Francesco Buemi: Other: supported by the group "Bracco imaging S.p.A"
Claudio Giardina: Other: supported by the group "Bracco imaging S.p.A"
Alessandro Perri: Other: supported by the group "Bracco imaging S.p.A"
Simona Caloggero: Other: supported by the group "Bracco imaging S.p.A"
Antonio Celona: Other: supported by the group "Bracco imaging S.p.A"
Nunziella Sicilia: Other: supported by the group "Bracco imaging S.p.A"
Orazio Ventura Spagnolo: Other: supported by the group "Bracco imaging S.p.A"
Fabio Galletta: Other: supported by the group "Bracco imaging S.p.A"
Giampiero Mastroeni: Other: supported by the group "Bracco imaging S.p.A"
Keywords: Artificial Intelligence, CT, MR, Computer Applications-3D, Computer Applications-Detection, diagnosis, Segmentation, Education and training
Findings and procedure details

Steps to install MONAI label and activate the server

Installation requirements

Table 2: Installation requirements. A GPU (graphics processing unit) with at least 16 GB of memory, preferably 24 GB, is strongly recommended, especially for training. Inference can also be run on a CPU (central processing unit), although it may become prohibitively slow for complex models, especially those trained on large datasets, which require high computational power.

MONAI label can be installed in three ways: from PyPI, from GitHub, or from DockerHub.

Table 3: Installation methods. MONAI label can be installed from PyPI, GitHub, or DockerHub.

PyPI (Python Package Index) is an online repository for sharing and distributing Python software packages, allowing developers to easily publish, discover, and install Python libraries and modules using the pip tool. We installed MONAI label from PyPI, following the steps below.

 

1) Install Anaconda

Anaconda is an open-source software that makes it easier for people to work with Python, especially in the fields of data science, machine learning, and scientific computing. It includes a helpful program called Conda, which makes it simple to install and organize different software packages. It can be downloaded at https://www.anaconda.com/download.

 

2) Open the Anaconda Prompt

a) After starting the Anaconda prompt, create a new virtual environment named "monailabel-env" with the desired Python version.

conda create -n monailabel-env python=x.x

x.x = your Python version, e.g. 3.9
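
For example, assuming Python 3.9 is the desired version, the command becomes:

conda create -n monailabel-env python=3.9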

b) Update the Python packages pip, setuptools, and wheel to their latest available versions

python -m pip install --upgrade pip setuptools wheel

c) Install the latest stable version of PyTorch

pip install torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/cuxxx

cuxxx = your CUDA version, e.g. cu121
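
For example, assuming CUDA 12.1 is installed, the command becomes:

pip install torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/cu121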

d) Check whether CUDA is enabled

python -c "import torch; print(torch.cuda.is_available())"

It will return "True" if your Python environment supports GPU acceleration through PyTorch and a GPU is available, otherwise "False".
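
As an optional additional check (assuming a GPU is detected), the name of the device can also be printed:

python -c "import torch; print(torch.cuda.get_device_name(0))"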

 

3) Install MONAI label

Activate the virtual environment and install it:

conda activate monailabel-env

pip install monailabel
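
As a quick sanity check that the installation succeeded, the MONAI label command-line interface should now be available, e.g. its help page can be printed:

monailabel --help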

 

4) Download apps

Download the radiology app:

monailabel apps --download --name radiology --output apps

Next, download the Bundle (monaibundle) app into the same folder used for the radiology app:

monailabel apps --download --name monaibundle --output apps

Fig 10: The radiology and bundle apps are saved in the "apps" folder, located in the path: Local Disk (C:)/Users/(User name)/apps.
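
If needed, the full list of sample apps available for download can be displayed with:

monailabel apps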

Please note: at the time of writing, enabling the bundle app and using MONAI Model Zoo models requires MONAI version 1.2.0. If a different version is installed, it must first be uninstalled using the command:

pip uninstall monai

Subsequently, one can proceed with the installation of version 1.2.0 using the command:

pip install monai==1.2.0
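
The installed MONAI version can then be verified with:

python -c "import monai; print(monai.__version__)"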

 

5) Download datasets

Typing "monailabel datasets" in the anaconda prompt one can see all datasets available as part of the medical segmentation decathlon.

Fig 11: Typing the command "monailabel datasets" all datasets available as part of the medical segmentation decathlon are enlisted.

The Medical Segmentation Decathlon is an international biomedical image analysis challenge in which algorithms for medical image segmentation compete across a variety of tasks and modalities (http://medicaldecathlon.com/) [2]. The competition involved ten distinct datasets, each presenting unique and challenging characteristics.

To download the datasets, use the following command:

monailabel datasets --download --name xxx --output datasets

xxx: the name of the dataset, e.g. Task01_BrainTumour
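
For example, to download the spleen dataset:

monailabel datasets --download --name Task09_Spleen --output datasets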

The datasets are downloaded into a corresponding folder, located at Local Disk (C:)/Users/(User name)/datasets. In this folder, one can view and organize all the datasets. Please note that the images are stored in compressed NIfTI format (.nii.gz).

Fig 12: Example of how a dataset is organized (in this case the "Task09_Spleen" folder is shown). Typically, a dataset consists of both training and testing data. The "imagesTr" folder contains training images, while the "imagesTs" folder contains testing images. The labels are stored in the "labelsTr" folder. Additionally, the "dataset.json" file contains information about the dataset.

 

6) Activate the server

monailabel start_server --app apps/xxx1 --studies datasets/xxx2/imagesTr --conf models xxx3

Through this command, one can choose:

xxx1: the app to load ("radiology" or "monaibundle")

xxx2: the dataset to work on (e.g. "Task10_Colon")

xxx3: the model to use (e.g. "deepedit" or "segmentation_spleen"). All models can be loaded simply by typing "all".
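
For example, assuming the radiology app, the Task09_Spleen dataset, and the spleen segmentation model downloaded in the previous steps, the server can be started with:

monailabel start_server --app apps/radiology --studies datasets/Task09_Spleen/imagesTr --conf models segmentation_spleen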

 

7) Install 3D-Slicer and MONAI label in the extensions manager

a) Download 3D-Slicer at https://www.slicer.org/ and install it.

b) Install MONAI label from the extensions manager of 3D-Slicer

Fig 13: The extension manager of 3D-Slicer with MONAI label.

 

8) Start using 3D-Slicer and MONAI label

Launch 3D-Slicer and navigate to Modules > Active Learning to access the MONAI Label module.

After loading the server, the MONAI Label module provides the following information:

In the top panel:

Fig 14: The top panel of MONAI label in 3D-Slicer (MONAI label server, app name, source volume and options).

Fig 15: The top panel of MONAI label in 3D-Slicer (Active learning).

 

In the bottom panel:

Fig 16: The bottom panel of MONAI label in 3D-Slicer (Segment editor, autosegmentation, Smart edit/Deepgrow and Scribbles).

 

9) Start using the Radiology and Bundle apps

 

Radiology app

Following the steps above, it is possible to start the segmentation using interactive approaches or autosegmentation. Below, we will show in more detail how to perform segmentation using Deepgrow, Deepedit, and Scribbles.

Fig 17: Examples of how to perform segmentation with Deepedit and Deepgrow. In Deepedit mode, simply place the landmark on the organ of interest to obtain segmentation. With Deepgrow, on the other hand, segmentation is achieved by placing the cursor on "foreground" for positive clicks and "background" for negative clicks.

Fig 18: Example of how to perform segmentation with Scribbles. First, select the scribbles ROI of interest (in this case, the liver). Then, scribble within the ROI in foreground and background modes.
     

 

Bundle app

By launching the bundle app, one can use the models available on the MONAI Model Zoo website.
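
For example, assuming the monaibundle app and the Task09_Spleen dataset downloaded earlier, a Model Zoo bundle such as the whole-body CT segmentation model can be loaded when starting the server:

monailabel start_server --app apps/monaibundle --studies datasets/Task09_Spleen/imagesTr --conf models wholeBody_ct_segmentation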

Fig 19: Wholebrainseg large UNest segmentation model. The model, based on UNest, is trained to segment 133 structures, each of which is individually listed in the Segment Editor. The final result is partially degraded by background noise, which has been removed using the "Erase" tool in the Segment Editor.

Fig 20: Pancreas ct dints segmentation model. This model, based on an architecture found with DiNTS (Differentiable Neural Network Topology Search), is capable of performing volumetric segmentation of the pancreas and pancreatic tumors in the portal phase.

Fig 21: Lung nodule ct detection model. The model performed well in this case, although nipples were mistakenly identified as nodules.

Fig 22: Renal structures UNest segmentation. The model is trained to recognize the renal cortex, medulla, and collecting system. It is a model based on UNest and trained on arterial and portal phase CT images.

Fig 23: WholeBody ct segmentation. This model is based on SegResNet and can label up to 104 whole-body segments.

 

10) Submit the labels

After completing the segmentation, it is necessary to submit the labeled data, after which the training can start. Moreover, it is possible to proceed to the next sample even while the training is ongoing.

Fig 24: The three subsequent stages after completing the segmentation are: 1) Submit the label, 2) Train, and 3) Next sample.

 

11) Train

During training, it is instructive to observe what happens both in the Anaconda prompt and in the Active Learning module of 3D-Slicer: in the terminal one can follow the model metrics, while in 3D-Slicer one can monitor the status and accuracy progress bars.

Fig 25: The active learning module during training. As the training progresses, the accuracy of the model should improve.

Table 4: The key metrics during the training, in particular the epoch, train loss, dice, mean dice, simulated clicks, and best value.

 

Active learning

MONAI label employs Active Learning to choose the most relevant data for model training. A radiologist labels a volume and incorporates it into the training pool. Subsequently, a machine learning model undergoes training with the updated annotation, resulting in the generation of a new model. This model facilitates uncertainty estimation, which is then employed to prioritize the unlabeled volumes, selecting a subset of the most uncertain ones. The identified uncertain data is then labeled by the radiologist and added to the training pool. This cycle can be repeated multiple times until a model with the desired performance is obtained [2, 5].

Fig 26: The active learning cycle (from the article by Diaz-Pinto A. et al (2023) MONAI Label: A framework for AI-assisted Interactive Labeling of 3D Medical Images. arXiv:2203.12362 [cs.HC]).

 
