Congress: ECR24
Poster Number: C-21327
Type: EPOS Radiologist (educational)
DOI: 10.26044/ecr2024/C-21327
Authorblock: F. Buemi, C. Giardina, A. Perri, S. Caloggero, A. Celona, N. Sicilia, O. Ventura Spagnolo, F. Galletta, G. Mastroeni; Messina/IT
Disclosures:
Francesco Buemi: Other: supported by the group "Bracco imaging S.p.A"
Claudio Giardina: Other: supported by the group "Bracco imaging S.p.A"
Alessandro Perri: Other: supported by the group "Bracco imaging S.p.A"
Simona Caloggero: Other: supported by the group "Bracco imaging S.p.A"
Antonio Celona: Other: supported by the group "Bracco imaging S.p.A"
Nunziella Sicilia: Other: supported by the group "Bracco imaging S.p.A"
Orazio Ventura Spagnolo: Other: supported by the group "Bracco imaging S.p.A"
Fabio Galletta: Other: supported by the group "Bracco imaging S.p.A"
Giampiero Mastroeni: Other: supported by the group "Bracco imaging S.p.A"
Keywords: Artificial Intelligence, CT, MR, Computer Applications-3D, Computer Applications-Detection, diagnosis, Segmentation, Education and training
Background

One of the most significant challenges practicing radiologists face when adopting AI is how to integrate it into their daily practice and contribute to its development. Although numerous software packages, web-based platforms, and services are available, they are often costly and out of reach for financially constrained healthcare institutions. Conversely, open-source tools are frequently difficult to use because they require knowledge beyond the typical expertise of radiologists. Furthermore, segmentation tasks often entail laborious and time-consuming manual work. MONAI Label is an open-source, freely available tool for annotating radiology datasets. It offers the distinct advantage of integrating with graphical user interface (GUI) software such as 3D-Slicer [1] or the web-based OHIF viewer (Open Health Imaging Foundation) [2]. In this educational poster, we show how to start using MONAI Label with 3D-Slicer.
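
As a practical starting point, the minimal sketch below wraps the two monailabel command-line calls that download the sample radiology app and start the annotation server; the MONAI Label extension in 3D-Slicer then connects to the server. The ./datasets folder and the choice of the DeepEdit model are our illustrative assumptions, not the only options.

```python
# Minimal setup sketch: assumes `pip install monailabel` has been run and that
# ./datasets contains NIfTI/DICOM studies to annotate. Paths and the model
# name ("deepedit") are illustrative choices.
import subprocess

# Download the sample "radiology" app into ./apps/radiology
subprocess.run(
    ["monailabel", "apps", "--download", "--name", "radiology", "--output", "apps"],
    check=True,
)

# Start the annotation server (default port 8000) with the DeepEdit model
# enabled; the MONAI Label extension in 3D-Slicer then connects to
# http://127.0.0.1:8000
subprocess.run(
    ["monailabel", "start_server",
     "--app", "apps/radiology",
     "--studies", "datasets",
     "--conf", "models", "deepedit"],
    check=True,
)
```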

Fig 1: MONAI Label is structured around three primary high-level modules: the client, server, and database/datastores. MONAI Label supports diverse graphical user interfaces (GUIs), including 3D-Slicer/OHIF, for the visualization and annotation of data. Moreover, it employs AI-driven interactive and non-interactive annotation techniques (from the article by Diaz-Pinto A. et al (2023) MONAI Label: A framework for AI-assisted Interactive Labeling of 3D Medical Images. arXiv:2203.12362 [cs.HC]).

 

MONAI Label applications

MONAI Label can be used with different applications, which are not restricted to radiology but extend to other fields. Currently, four distinct applications are available:

  • Radiology
  • Bundle
  • Pathology
  • Video

Only the radiology and bundle applications will be discussed here, as the pathology and video applications are aimed at pathologists and endoscopists, respectively.
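
For reference, the installed MONAI Label version can list the sample apps it ships with from the command line; a one-line wrapper (output naming may vary by version):

```python
# List the sample apps bundled with the installed MONAI Label version
# (radiology, monaibundle, pathology, and an endoscopy/video app,
# depending on version).
import subprocess

subprocess.run(["monailabel", "apps"], check=True)
```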

 

Radiology app

The radiology app offers two types of segmentation:

  • Interactive
  • Autosegmentation

Interactive segmentation

Interactive segmentation includes three annotation approaches:

  1. Deepgrow: This approach relies on positive and negative interactions. Positive clicks from users contribute to expanding the segmentation, incorporating the selected location into the segmentation label. Conversely, negative clicks serve to exclude a particular region from the area of interest [2]. With Deepgrow 2D, users can annotate images slice by slice, whereas Deepgrow 3D offers the capability to annotate entire volumes.
    Fig 2: Deepgrow relies on positive and negative interactions. Positive clicks from users contribute to expanding the segmentation, incorporating the selected location into the segmentation label. Conversely, negative clicks serve to exclude a particular region from the area of interest (from the article by Diaz-Pinto A. et al (2023) MONAI Label: A framework for AI-assisted Interactive Labeling of 3D Medical Images. arXiv:2203.12362 [cs.HC]).
    Fig 3: Example of Deepgrow. Positive and negative clicks are provided to refine the quality of the segmentation.
     
  2. Deepedit: Deepedit enhances Deepgrow's segmentation by incorporating a two-stage process. In the initial non-interactive stage, the segmentation is generated through an automated inference process (e.g., with a network such as U-Net) without the need for user clicks. Subsequently, in the interactive stage, users provide clicks, similar to Deepgrow [2, 3]. A minimal client-side request sketch is shown after this list.
    Fig 4: Deepedit enhances Deepgrow's segmentation by incorporating a two-stage process. In the initial non-interactive stage (automatic segmentation mode), the segmentation is generated through an automated inference process (e.g., with a network such as U-Net) without the need for user clicks. Subsequently, in the interactive stage (or Deepgrow mode), users provide clicks, similar to Deepgrow (from the article by Diaz-Pinto A. et al (2023) MONAI Label: A framework for AI-assisted Interactive Labeling of 3D Medical Images. arXiv:2203.12362 [cs.HC]).
    Fig 5: Example of Deepedit combined with Deepgrow. After the first click, both kidneys are segmented. In this case, the segmentation of the right kidney has been refined with Deepgrow.
  3. Scribbles: The scribbles-based segmentation model enables interactive segmentation through free-hand drawings, specifically foreground (FG) or background (BG) scribbles [2].
    Fig 6: The scribbles-based segmentation model enables interactive segmentation through free-hand drawings, specifically foreground (FG) or background (BG) scribbles. In the so-called “likelihood inference stage”, scribbles can be used either to generate segmentation labels (“scribbles-based online likelihood segmentation”) or to improve segmentations from a deep learning model by incorporating user scribbles (“scribbles-based CNN segmentation refinement”). Subsequently, in the “segmentation refinement stage” the initial segmentation is refined using an energy optimization approach (from the article by Diaz-Pinto A. et al (2023) MONAI Label: A framework for AI-assisted Interactive Labeling of 3D Medical Images. arXiv:2203.12362 [cs.HC]).
    Fig 7: Scribbles applied in the segmentation of the liver. The foreground and background scribbles are drawn in all three planes.
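
Outside the 3D-Slicer GUI, the same interactive models can be queried through the server's REST interface. The following is a hypothetical sketch of a Deepedit request with user clicks; the server URL, the image id ("spleen_10"), the click coordinates, and the response handling are illustrative assumptions, and the exact endpoint schema for a given version is documented at the running server's /docs page.

```python
# Hypothetical sketch of an interactive inference request (Deepedit/Deepgrow
# style): foreground clicks grow the label, background clicks exclude regions.
# All names and coordinates here are illustrative assumptions; check the
# running server's /docs page for the exact schema.
import json
import requests

SERVER = "http://127.0.0.1:8000"
IMAGE_ID = "spleen_10"  # an image already known to the server's datastore

params = {
    "foreground": [[140, 210, 28], [152, 195, 30]],  # positive clicks (x, y, z)
    "background": [[60, 60, 10]],                    # negative click
}

resp = requests.post(
    f"{SERVER}/infer/deepedit",
    params={"image": IMAGE_ID},
    files={"params": (None, json.dumps(params))},
)
resp.raise_for_status()

# Naive handling: assume the body is the label volume itself. Depending on
# configuration the server may instead return a multipart body (JSON + image).
with open("label.nii.gz", "wb") as f:
    f.write(resp.content)
```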
     

Autosegmentation

Autosegmentation is based on a standard convolutional neural network (CNN), such as U-Net, without any interaction by the user [2].

Fig 8: Example of auto-segmentation using the Prostate MRI anatomy model of the MONAI zoo, trained with the U-Net architecture. Just click "Run" to obtain the segmentation of the peripheral and transition zones of the prostate.
U-Net is a CNN architecture designed for image segmentation tasks [4].
Fig 9: The name "U-Net" derives from its U-shaped architecture, which consists of a contracting path, a bottleneck, and an expanding path. 1) Contracting Path: Utilizes downsampling to extract contextual information from a broad range of image pixels. 2) Bottleneck: Marks the transition from the contracting to expanding paths, enhancing model generalization. 3) Expanding Path: Implements upsampling to combine feature maps and generate final class information. This architecture efficiently captures both contextual and positional details, making it well-suited for tasks like image segmentation (Modified figure from the article by Ronneberger et al (2015) U-Net: Convolutional Networks for Biomedical Image Segmentation. arXiv:1505.04597 [cs.CV]).
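To make the architecture concrete, the illustrative snippet below instantiates a 3D U-Net with MONAI's reference implementation; the channel and stride choices are arbitrary example values, not those of any particular MONAI Label model.

```python
# Illustrative 3D U-Net built with MONAI's reference implementation; the
# hyperparameters are arbitrary examples, not a MONAI Label preset.
import torch
from monai.networks.nets import UNet

model = UNet(
    spatial_dims=3,                   # 3D volumes (CT/MR)
    in_channels=1,                    # one input channel (e.g. a single MR sequence)
    out_channels=2,                   # background + one anatomical label
    channels=(16, 32, 64, 128, 256),  # feature maps per resolution level
    strides=(2, 2, 2, 2),             # downsampling between levels (contracting path)
    num_res_units=2,
)

# One forward pass on a dummy 96x96x96 patch: one output channel per class
x = torch.rand(1, 1, 96, 96, 96)
print(model(x).shape)  # torch.Size([1, 2, 96, 96, 96])
```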
MONAI Label also allows the user to test other architectures, such as UNesT or DynUNet.

The autosegmentation module can be viewed as an easy way to run inference, allowing model accuracy to be assessed during or after training, as in the sketch below.
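As a hypothetical sketch, running a non-interactive model through the REST interface looks like the interactive request shown earlier, just without click parameters; the model name "segmentation" and the image id are again example assumptions.

```python
# Hypothetical autosegmentation request: same endpoint pattern as the
# interactive example, but with no foreground/background clicks. The model
# name "segmentation" and the image id are illustrative assumptions.
import requests

SERVER = "http://127.0.0.1:8000"

resp = requests.post(f"{SERVER}/infer/segmentation", params={"image": "spleen_10"})
resp.raise_for_status()

with open("auto_label.nii.gz", "wb") as f:
    f.write(resp.content)  # naive save; see the server's /docs for the exact format
```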

 

Bundle app

The Bundle app offers pre-trained models for tasks such as inference, training, and pre-/post-processing of diverse anatomical targets, through integration with the MONAI Model Zoo (https://monai.io/model-zoo.html). A comprehensive, up-to-date list of the radiology models is available on the Model Zoo website.

Table 1: Some radiology models available on the MONAI Model Zoo website.
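
A minimal sketch of running the Bundle app with a Model Zoo model: the app name "monaibundle" and the model "spleen_ct_segmentation" follow the MONAI Label examples, and any radiology bundle listed on the Model Zoo page could be swapped in.

```python
# Sketch: download the Bundle app and start the server with a Model Zoo
# model. The ./datasets path is an illustrative choice, as in the earlier
# setup example.
import subprocess

subprocess.run(
    ["monailabel", "apps", "--download", "--name", "monaibundle", "--output", "apps"],
    check=True,
)
subprocess.run(
    ["monailabel", "start_server",
     "--app", "apps/monaibundle",
     "--studies", "datasets",
     "--conf", "models", "spleen_ct_segmentation"],
    check=True,
)
```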
