Congress: ECR25
Poster Number: C-17526
Type: Poster: EPOS Radiologist (scientific)
Authorblock: D. Rosewarne1, R. Marlow2, M. Butt1, Q. Wadood1, F. Zaman1, J. G. Zachariah1, A. Jebril1, H. Bilal1; 1Wolverhampton/UK, 2Crewe/UK
Disclosures:
David Rosewarne: Nothing to disclose
Roger Marlow: Nothing to disclose
Mahreen Butt: Nothing to disclose
Qasim Wadood: Nothing to disclose
Fahad Zaman: Nothing to disclose
Jiju George Zachariah: Nothing to disclose
Asma Jebril: Nothing to disclose
Hasan Bilal: Nothing to disclose
Keywords: Computer applications, PACS, Computer Applications-General, Economics
Methods and materials

Analogously to increasing human obesity, our data stores are becoming heavier and hungrier by the day. Unlike the human case, this happens largely invisibly, in data centres around the world. YouTube, TikTok, Instagram, WhatsApp and X, for example, are all growing in content exponentially, and more rapidly than medical imaging. None of this growth is sustainable, and there is widespread discussion of the increasing consumption of electricity and water by data storage and processing, including AI.

We are used to dealing in megabytes (MB) and gigabytes (GB) for our files, and when we buy storage we typically do so in terabytes (TB). Note the equivalences 1 TB = 1 000 GB = 1 000 000 MB. In this discussion we also need the larger units, the petabyte (PB) and the exabyte (EB), where 1 EB = 1 000 PB = 1 000 000 TB. We will not quite need the zettabyte (ZB), where 1 ZB = 1 000 EB.
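
The decimal equivalences above can be restated as a few constants; the following minimal Python sketch is simply the SI arithmetic, with nothing institution-specific in it:

```python
# Decimal (SI) storage units. Note these are powers of 1000, not the
# binary powers-of-1024 units (KiB, MiB, GiB, ...).
MB = 10**6   # megabyte
GB = 10**9   # gigabyte
TB = 10**12  # terabyte
PB = 10**15  # petabyte
EB = 10**18  # exabyte
ZB = 10**21  # zettabyte

assert 1 * TB == 1_000 * GB == 1_000_000 * MB
assert 1 * EB == 1_000 * PB == 1_000_000 * TB
assert 1 * ZB == 1_000 * EB
```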

For context, a single moderate-resolution camera image might need a megabyte of storage and a film download a few gigabytes. A PC hard disk might offer a terabyte. YouTube accepts roughly a million hours of video daily, placing its daily archiving need in the petabyte range; with backups, this brings the yearly need to around an exabyte. Adding other social media channels, and assuming that most of this data has been generated in the past decade, places the social media burden on at least the 100 exabyte (0.1 zettabyte) scale.
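
The YouTube figures above are back-of-envelope estimates, and a short sketch makes the arithmetic explicit. The rate of roughly 1 GB of storage per hour of video and the factor of three for backup copies are illustrative assumptions, not measured values:

```python
GB, PB, EB = 10**9, 10**15, 10**18

hours_per_day = 1_000_000   # ~ a million hours uploaded daily
bytes_per_hour = 1 * GB     # assumed average storage per hour of video

daily = hours_per_day * bytes_per_hour
yearly_with_backups = daily * 365 * 3   # assume ~3 stored copies

print(f"daily ingest: {daily / PB:.1f} PB")                        # ~1.0 PB
print(f"yearly, with backups: {yearly_with_backups / EB:.2f} EB")  # ~1.10 EB
```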

Turning to our institution, all studies sent to PACS during the same week in February of 2009, 2014, 2019 and 2024 were collated, and a sample of 200 studies was taken for each year.
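
As an illustration of the sampling step, here is a minimal sketch assuming the weekly study lists were exported to per-year CSV files; the file names, column layout and use of random sampling are assumptions for illustration rather than a record of our exact procedure:

```python
import pandas as pd

frames = []
for year in (2009, 2014, 2019, 2024):
    # e.g. "studies_2009.csv": one row per study sent to PACS that week
    weekly = pd.read_csv(f"studies_{year}.csv")
    # draw 200 studies for the year; fixed seed for reproducibility
    frames.append(weekly.sample(n=200, random_state=42).assign(year=year))

sample = pd.concat(frames, ignore_index=True)
sample.to_csv("sampled_studies.csv", index=False)
```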

The study list was recorded on a shared spreadsheet, and individual radiology colleagues exported single studies from the PACS archive, selecting only the DICOM data for each study (ignoring proprietary format options and the option of bundling a DICOM viewer with the study).

These studies were compressed using Windows Zip, taken as a general-purpose utility that would provide a lower bound on the compression obtainable with an algorithm optimised for medical imaging. Note that Zip performs lossless compression, so a faithful copy of the original is recovered when the file is decompressed; contrast this with lossy formats such as JPEG, where compression discards information.

The uncompressed study sizes and the compressed file sizes were recorded on the spreadsheet and formed the dataset for analysis.
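
A minimal sketch of the compression and measurement step, assuming each exported study sits in its own folder of DICOM files (paths are hypothetical). Python's zipfile module with ZIP_DEFLATED stands in for Windows Zip; both produce the same Deflate-based, lossless ZIP format:

```python
import zipfile
from pathlib import Path

def measure_study(study_dir: Path) -> tuple[int, int]:
    """Return (raw_bytes, compressed_bytes) for one exported study."""
    files = [p for p in study_dir.rglob("*") if p.is_file()]
    raw = sum(p.stat().st_size for p in files)

    # Lossless, general-purpose Deflate compression, as in Windows Zip
    archive = study_dir.with_suffix(".zip")
    with zipfile.ZipFile(archive, "w", zipfile.ZIP_DEFLATED) as zf:
        for p in files:
            zf.write(p, p.relative_to(study_dir))
    return raw, archive.stat().st_size

raw, zipped = measure_study(Path("exports/study_0001"))  # hypothetical path
print(f"raw: {raw} B, zipped: {zipped} B, ratio: {zipped / raw:.2f}")
```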
