
Deep radiomics

Giannini V.
2025-01-01

Abstract

Deep radiomics employs deep learning to extract high-dimensional imaging features that capture complex, nonlinear patterns beyond the reach of traditional handcrafted radiomic features. Convolutional neural networks (CNNs), transformers, and hybrid models enable automated feature learning, eliminating the need for predefined mathematical formulas and enhancing the ability to identify disease-specific imaging biomarkers. However, the implementation of deep radiomics is hindered by several challenges, including the selection of optimal network architectures, the paucity of domain-specific pretrained models, and the limited availability of large-scale annotated medical imaging datasets. A further barrier to clinical adoption is the black-box nature of deep learning models, which complicates interpretability and trust in AI-generated predictions. To address this, explainability techniques such as Grad-CAM, SHAP, and LIME have been developed, allowing visualization and quantification of the imaging features that most strongly drive model decisions. These tools are essential for validating AI outputs and ensuring their alignment with radiological expertise. While deep radiomics has shown promise in early studies, its widespread clinical implementation requires overcoming technical and practical barriers. Radiologists play a pivotal role in this process, as they can validate deep features, ensure model transparency, and integrate AI-driven insights into clinical workflows. As deep radiomics continues to evolve, interdisciplinary collaboration, standardization efforts, and advancements in AI literacy will be key to unlocking its full potential in precision medicine.
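To illustrate the explainability techniques the abstract mentions, the following is a minimal NumPy sketch of the core Grad-CAM computation: channel weights are obtained by global average pooling the gradients of the target score with respect to a convolutional layer's activations, and a ReLU-rectified weighted sum of the feature maps yields the class-discriminative heatmap. The arrays below are synthetic placeholders, not outputs of a trained CNN; in practice the activations and gradients would come from a deep learning framework via hooks on the chosen layer.

```python
import numpy as np

def grad_cam(feature_maps, gradients):
    """Compute a Grad-CAM heatmap.

    feature_maps: array of shape (K, H, W), activations of a conv layer
    gradients:    array of shape (K, H, W), d(score)/d(activation)
    Returns an (H, W) heatmap normalized to [0, 1].
    """
    # Channel importance weights: global average pooling of the gradients
    weights = gradients.mean(axis=(1, 2))              # shape (K,)
    # Weighted combination of the feature maps
    cam = np.tensordot(weights, feature_maps, axes=1)  # shape (H, W)
    # ReLU: keep only regions with a positive influence on the score
    cam = np.maximum(cam, 0)
    # Normalize for overlay visualization
    if cam.max() > 0:
        cam = cam / cam.max()
    return cam

# Toy example with 4 synthetic channels of 8x8 activations
rng = np.random.default_rng(0)
maps = rng.random((4, 8, 8))
grads = rng.standard_normal((4, 8, 8))
heatmap = grad_cam(maps, grads)
print(heatmap.shape)
```

In a radiomics workflow, the resulting heatmap would be upsampled to the input image size and overlaid on the scan, letting a radiologist check whether the regions driving the model's prediction correspond to clinically meaningful anatomy.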
Year: 2025
Book: Methodology in Radiomics: Step-by-step Guide in Radiomics Pipeline
Publisher: Elsevier
Pages: 145-152
ISBN: 9780443292422
Keywords: Deep learning; diagnostic imaging; explainability; medical imaging; trustworthiness
File: Chapter 8: Deep radiomics - Proof Central.pdf (Adobe PDF, 382.49 kB, restricted access)

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/2318/2117530
Citations
  • Scopus: 0
  • PMC and Web of Science counts: not available