
One of the most significant challenges practicing radiologists face when adopting AI is how to integrate it into daily practice and contribute to its development. Although numerous software packages, web-based platforms, and services are available, they are often costly and out of reach for financially constrained healthcare institutions. Conversely, open-source tools are frequently difficult to use because they require knowledge beyond the typical expertise of radiologists. Furthermore, segmentation tasks often entail laborious and time-consuming manual work. MONAI Label is an open-source, freely available tool for annotating radiology datasets. It offers the distinct advantage of integrating with graphical user interface (GUI) software such as 3D-Slicer [1] and the web-based OHIF (Open Health Imaging Foundation) viewer [2]. In this educational poster, we show how to start using MONAI Label with 3D-Slicer.
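MONAI Label follows a client-server design: the server hosts the models and the image datastore, while 3D-Slicer (with the MONAI Label extension installed from the Extensions Manager) acts as the annotation client. In a typical setup the server is installed with `pip install monailabel`, the radiology app is downloaded (e.g. `monailabel apps --download --name radiology --output apps`), and the server is started with a command along the lines of `monailabel start_server --app apps/radiology --studies <image folder> --conf models deepedit`; exact flags may differ between releases. The snippet below is a minimal sketch, assuming the server's `/info/` REST endpoint as documented in recent MONAI Label releases, that checks a local server is reachable and lists its models before connecting from 3D-Slicer.

```python
# Minimal sketch: query a running MONAI Label server before connecting 3D-Slicer.
# Assumes the server runs locally on the default port (8000) and exposes the
# /info/ endpoint, as in recent MONAI Label releases.
import requests

SERVER_URL = "http://127.0.0.1:8000"  # adjust host/port to your setup

def list_models(server_url: str = SERVER_URL) -> None:
    # /info/ returns the app name, the labels, and the models loaded by the server
    response = requests.get(f"{server_url}/info/", timeout=10)
    response.raise_for_status()
    info = response.json()
    print("App:", info.get("name"))
    for model_name, model_info in info.get("models", {}).items():
        print(f"  model: {model_name} (type: {model_info.get('type')})")

if __name__ == "__main__":
    list_models()
```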
MONAI Label applications
MONAI Label can be used with different applications, which are not restricted to radiology but also cover other fields. Currently, four distinct applications are available:
- Radiology
- Bundle
- Pathology
- Video
Only the radiology and bundle applications are discussed here, as the pathology and video applications are intended for pathologists and endoscopists, respectively.
Radiology app
The radiology app offers two types of segmentation:
- Interactive
- Autosegmentation
Interactive segmentation
Interactive segmentation includes three annotation approaches:
- Deepgrow: This approach relies on positive and negative interactions. Positive clicks from users contribute to expanding the segmentation, incorporating the selected location into the segmentation label. Conversely, negative clicks serve to exclude a particular region from the area of interest [2]. With Deepgrow 2D, users can annotate images slice by slice, whereas Deepgrow 3D offers the capability to annotate entire volumes.
Fig 2: Deepgrow relies on positive and negative interactions. Positive clicks from users contribute to expanding the segmentation, incorporating the selected location into the segmentation label. Conversely, negative clicks serve to exclude a particular region from the area of interest (from the article by Diaz-Pinto A. et al. (2023) MONAI Label: A framework for AI-assisted Interactive Labeling of 3D Medical Images. arXiv:2203.12362 [cs.HC]).
Fig 3: Example of Deepgrow. Positive and negative clicks are provided to refine the quality of the segmentation.
- Deepedit: Deepedit enhances Deepgrow's segmentation by incorporating a two-stage process. In the initial non-interactive stage, the segmentation is generated through an automated segmentation process (i.e. inference with a network such as U-Net) without the need for user clicks. Subsequently, in the interactive stage, users provide clicks, similar to Deepgrow [2, 3] (see the scripted example after this list).
Fig 4: Deepedit enhances Deepgrow's segmentation by incorporating a two-stage process. In the initial non-interactive stage (automatic segmentation mode), the segmentation is generated through an automated segmentation process (i.e. inference with a network such as U-Net) without the need for user clicks. Subsequently, in the interactive stage (or Deepgrow mode), users provide clicks, similar to Deepgrow (from the article by Diaz-Pinto A. et al. (2023) MONAI Label: A framework for AI-assisted Interactive Labeling of 3D Medical Images. arXiv:2203.12362 [cs.HC]).
Fig 5: Example of Deepedit combined with Deepgrow. After the first click, both kidneys are segmented. In this case, the segmentation of the right kidney has been refined with Deepgrow.
- Scribbles: The scribbles-based segmentation model enables interactive segmentation through free-hand drawings, specifically foreground (FG) or background (BG) scribbles [2].
Fig 6: Scribbles-based segmentation enables interactive segmentation through free-hand drawings, specifically foreground (FG) or background (BG) scribbles. In the so-called “likelihood inference stage”, scribbles can be used either to generate segmentation labels (“scribbles-based online likelihood segmentation”) or to improve segmentations from a deep learning model by incorporating user scribbles (“scribbles-based CNN segmentation refinement”). Subsequently, in the “segmentation refinement stage”, the initial segmentation is refined using an energy optimization approach (from the article by Diaz-Pinto A. et al. (2023) MONAI Label: A framework for AI-assisted Interactive Labeling of 3D Medical Images. arXiv:2203.12362 [cs.HC]).
Fig 7: Scribbles applied to the segmentation of the liver. The foreground and background scribbles are drawn in all three planes.
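The interactive approaches are driven by the same server API that 3D-Slicer calls when clicks are placed in the viewer, so they can also be scripted. The sketch below is a minimal illustration, assuming a locally running server that exposes a model named `deepedit`, the `POST /infer/{model}` REST endpoint, and the `foreground`/`background` click convention described above; the image id `spleen_10`, the click coordinates, and the `output=image` query parameter are placeholders and assumptions that may differ between MONAI Label releases.

```python
# Minimal sketch: request an interactive segmentation from a MONAI Label server.
# Assumes a local server exposing a "deepedit" model and an image with id
# "spleen_10" in its datastore; endpoint and parameter names follow the MONAI
# Label REST API but may vary between releases.
import json
import requests

SERVER_URL = "http://127.0.0.1:8000"

def infer_with_clicks(model: str, image_id: str, foreground, background):
    # Clicks are passed as lists of [x, y, z] voxel coordinates, mirroring what
    # the 3D-Slicer plugin sends for Deepgrow/Deepedit interactions.
    params = {"foreground": foreground, "background": background}
    response = requests.post(
        f"{SERVER_URL}/infer/{model}",
        params={"image": image_id, "output": "image"},  # return the label volume only
        files={"params": (None, json.dumps(params))},
        timeout=120,
    )
    response.raise_for_status()
    return response.content  # bytes of the resulting label (e.g. NIfTI)

# Stage 1 (Deepedit automatic mode): no clicks at all.
label = infer_with_clicks("deepedit", "spleen_10", foreground=[], background=[])

# Stage 2 (interactive refinement): add a positive and a negative click.
label = infer_with_clicks("deepedit", "spleen_10",
                          foreground=[[120, 135, 42]], background=[[80, 60, 42]])

with open("spleen_10_label.nii.gz", "wb") as f:
    f.write(label)
```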
Autosegmentation
Autosegmentation is based on a standard convolutional neural network (CNN), such as U-Net, without any interaction by the user [2].
The autosegmentation module can be viewed as an easy way to run inference, allowing model accuracy to be assessed either during or after training.
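Because autosegmentation requires no clicks, it can be run over a whole dataset to review the model's output. The sketch below reuses the same assumed REST conventions as above; the model name `segmentation` and the image ids are placeholders that depend on how the server was started.

```python
# Minimal sketch: batch autosegmentation against a MONAI Label server.
# Assumes the radiology app exposes a CNN model named "segmentation"; endpoint
# and parameter names follow the MONAI Label REST API but may vary between releases.
import requests

SERVER_URL = "http://127.0.0.1:8000"
IMAGE_IDS = ["spleen_10", "spleen_12", "spleen_17"]  # placeholder datastore ids

for image_id in IMAGE_IDS:
    response = requests.post(
        f"{SERVER_URL}/infer/segmentation",
        params={"image": image_id, "output": "image"},
        timeout=300,
    )
    response.raise_for_status()
    out_file = f"{image_id}_autoseg.nii.gz"
    with open(out_file, "wb") as f:
        f.write(response.content)
    print("saved", out_file)
```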
Bundle app
The Bundle app offers pre-trained models for tasks such as inference, training, and pre/post-processing of diverse anatomical targets, through integration with the MONAI Model Zoo (https://monai.io/model-zoo.html). A comprehensive, regularly updated list of radiology models is available on the Model Zoo website.
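The same bundles can also be pulled directly from the Model Zoo and served through the bundle app (typically started with a command along the lines of `monailabel start_server --app apps/monaibundle --studies <image folder> --conf models <bundle name>`; exact flags may differ between releases). The snippet below is a minimal sketch that downloads one such bundle with the `monai.bundle` API; the bundle name `spleen_ct_segmentation` is only an example taken from the Model Zoo listing.

```python
# Minimal sketch: fetch a pre-trained bundle from the MONAI Model Zoo.
# The bundle name "spleen_ct_segmentation" is an example; see the Model Zoo
# website for the current list of radiology bundles.
from monai.bundle import download

download(name="spleen_ct_segmentation", bundle_dir="./bundles")
# The downloaded folder contains the trained weights and the configs used for
# inference, training, and pre/post-processing.
```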