Radiology report generation using transformers conditioned with non-imaging data
10 April 2023
Abstract
Medical image interpretation is central to most clinical applications, such as disease diagnosis, treatment planning, and prognostication. In clinical practice, radiologists examine medical images (e.g. chest x-rays, computed tomography images, etc.) and manually compile their findings into reports, which can be a time-consuming process. Automated approaches to radiology report generation can therefore reduce radiologist workload and improve efficiency in the clinical pathway. While recent deep-learning approaches for automated report generation from medical images have seen some success, most studies have relied on image-derived features alone, ignoring non-imaging patient data. Although a few studies have incorporated word-level context alongside the image, the use of patient demographics remains unexplored. Furthermore, prior approaches to this task commonly use encoder-decoder frameworks consisting of a convolutional vision model followed by a recurrent language model. Although recurrent text generators have achieved noteworthy results, they have the drawbacks of a limited reference window and of attending to only one part of the image when generating the next word. This paper proposes a novel multi-modal transformer network that integrates chest x-ray (CXR) images and associated patient demographic information to synthesise patient-specific radiology reports. The proposed network uses a convolutional neural network (CNN) to extract visual features from CXRs and a transformer-based encoder-decoder network that combines the visual features with semantic text embeddings of patient demographic information to synthesise full-text radiology reports. The designed network not only alleviates the limitations of recurrent models but also improves the encoding and generative processes by including more context in the network. Data from two public databases were used to train and evaluate the proposed approach: CXRs and reports were extracted from the MIMIC-CXR database and combined with the corresponding patients' data (gender, age, and ethnicity) from MIMIC-IV. Based on the evaluation metrics used (BLEU-1 to BLEU-4 and BERTScore), including patient demographic information was found to improve the quality of reports generated with the proposed approach, relative to a baseline network trained using CXRs alone. The proposed approach shows potential for enhancing radiology report generation by leveraging rich patient metadata and combining semantic text embeddings derived therefrom with medical image-derived visual features.
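As a concrete illustration of the architecture described above, the following is a minimal PyTorch sketch (not the authors' released code) of a CNN plus transformer encoder-decoder conditioned on demographic embeddings. The ResNet-18 backbone, layer counts, embedding sizes, and names such as MultiModalReportGenerator and demo_ids are assumptions for illustration; the abstract does not specify these details.

```python
# Minimal sketch (assumed implementation, not the authors' code) of a multi-modal
# report generator: a CNN extracts visual features from a chest x-ray, demographic
# fields are embedded as extra tokens, and a transformer encoder-decoder generates
# the report text autoregressively.

import torch
import torch.nn as nn
from torchvision.models import resnet18


class MultiModalReportGenerator(nn.Module):
    def __init__(self, vocab_size, d_model=512, max_len=256):
        super().__init__()
        # Visual encoder: CNN backbone with classifier removed; the 7x7 spatial
        # grid is flattened into 49 visual tokens (backbone choice is an assumption).
        cnn = resnet18(weights=None)
        self.backbone = nn.Sequential(*list(cnn.children())[:-2])
        self.visual_proj = nn.Linear(512, d_model)

        # Demographic conditioning: integer codes for gender, age bin, and ethnicity
        # are embedded and appended to the visual tokens fed to the encoder.
        self.demo_embed = nn.Embedding(128, d_model)

        # Report-token embedding, positional embedding, and transformer encoder-decoder.
        self.token_embed = nn.Embedding(vocab_size, d_model)
        self.pos_embed = nn.Embedding(max_len, d_model)
        self.transformer = nn.Transformer(
            d_model=d_model, nhead=8,
            num_encoder_layers=3, num_decoder_layers=3,
            batch_first=True,
        )
        self.lm_head = nn.Linear(d_model, vocab_size)

    def forward(self, cxr, demo_ids, report_ids):
        # cxr:        (B, 3, 224, 224) chest x-ray images
        # demo_ids:   (B, 3) integer codes for gender, age bin, ethnicity
        # report_ids: (B, T) report tokens (teacher forcing during training)
        feats = self.backbone(cxr)                      # (B, 512, 7, 7)
        vis = feats.flatten(2).transpose(1, 2)          # (B, 49, 512)
        vis = self.visual_proj(vis)                     # (B, 49, d_model)
        demo = self.demo_embed(demo_ids)                # (B, 3, d_model)
        memory_in = torch.cat([vis, demo], dim=1)       # multi-modal encoder input

        T = report_ids.size(1)
        pos = torch.arange(T, device=report_ids.device)
        tgt = self.token_embed(report_ids) + self.pos_embed(pos)
        causal_mask = nn.Transformer.generate_square_subsequent_mask(T).to(cxr.device)

        out = self.transformer(src=memory_in, tgt=tgt, tgt_mask=causal_mask)
        return self.lm_head(out)                        # (B, T, vocab_size) next-token logits
```

Such a model would typically be trained with token-level cross-entropy under teacher forcing and decoded autoregressively at inference, with the generated reports scored against reference reports using BLEU-1 to BLEU-4 and BERTScore, as in the paper's evaluation.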
© (2023) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
Nurbanu Aksoy, Nishant Ravikumar, and Alejandro F. Frangi "Radiology report generation using transformers conditioned with non-imaging data", Proc. SPIE 12469, Medical Imaging 2023: Imaging Informatics for Healthcare, Research, and Applications, 124690O (10 April 2023); https://doi.org/10.1117/12.2653672
KEYWORDS
Transformers, Visualization, Chest imaging, Radiology