Current methods for developing radiation therapy (RT) treatment plans for head and neck cancers rely on clinician experience and a small set of universal guidelines, which leads to inconsistent and variable planning. Data-driven support can assist clinicians by reducing this inconsistency and by providing empirical estimates for minimizing radiation to healthy organs near the tumor. We created a database of DICOM RT objects that stores historical cases; when a new DICOM object is uploaded, the system returns a set of similar treatment plans to assist the clinician in planning for the current patient. The system first extracts features from each DICOM RT object to quantitatively compare and evaluate case similarity, enabling it to mine for cases within a defined similarity. The feature extraction methods are based on the spatial relationships between the tumor and the organs at risk (OARs), from which the overlap volume histogram (OVH) and spatial target similarity are generated; together these capture the volumetric and locational similarity between an OAR and the tumor. Finding cases with similar tumor anatomy is useful because anatomical similarity translates to similarity in radiation dosage. The developed system was applied to a total of 247 cases from three RT sites, University of California Los Angeles, Technical University of Munich, and State University of New York at Buffalo (Roswell Park), to evaluate both inter- and intra-institutional best practices and results. A future roadmap for correlating outcome results with the decision support system is discussed, which would enhance the overall performance and utilization of the system in the RT workflow. Because the database returns historical cases similar to the current one, it could become a worthwhile decision support tool for clinicians as they create new RT treatment plans.
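The overlap volume histogram can be illustrated with a short sketch. The function below is an illustrative assumption rather than the system's actual implementation: it approximates the OVH by computing, for each organ-at-risk voxel, the distance to the nearest target voxel (a full OVH uses signed distances to the target surface, negative inside the target) and returns the cumulative fraction of OAR volume within each distance.

```python
import numpy as np

def overlap_volume_histogram(oar_mask, target_mask, spacing=(1.0, 1.0, 1.0), bins=20):
    """Approximate OVH between an organ at risk (OAR) and a target volume.

    oar_mask, target_mask: boolean 3D arrays on the same voxel grid.
    spacing: voxel size in mm along each axis (assumed isotropic here).
    Returns (bin_edges, cumulative_fraction): cumulative_fraction[i] is the
    fraction of OAR volume within distance bin_edges[i + 1] of the target.
    """
    sp = np.asarray(spacing, dtype=float)
    oar_pts = np.argwhere(oar_mask) * sp       # physical coords of OAR voxels
    tgt_pts = np.argwhere(target_mask) * sp    # physical coords of target voxels
    # Brute-force nearest-target distance for every OAR voxel (fine for a
    # sketch; a real system would use a distance transform for speed).
    d = np.min(np.linalg.norm(oar_pts[:, None, :] - tgt_pts[None, :, :], axis=2), axis=1)
    hist, edges = np.histogram(d, bins=bins)
    cumulative = np.cumsum(hist) / d.size      # fraction of OAR volume within r
    return edges, cumulative
```

Two OVH curves can then be compared (e.g., by an L2 distance between cumulative fractions on shared bins) to rank historical cases by anatomical similarity.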
The rise of deep learning (DL) frameworks and their application to object recognition could benefit image-based medical diagnosis. Since the eye is believed to be a window into human health, applying DL to identify abnormal ophthalmic photographs (OPs) would greatly help ophthalmologists reduce their disease-screening workload. In our previous work, we employed ResNet-50 to construct a classification model for diabetic retinopathy (DR) within the PACS. In this study, we implemented recent DL object detection and semantic segmentation frameworks to empower the eye-PACS. The Mask R-CNN framework was selected for object detection and instance segmentation of the optic disc (OD) and the macula. Furthermore, the U-Net framework was utilized for semantic segmentation of retinal vessel pixels from OPs. Both frameworks achieved state-of-the-art segmentation performance, and the segmented results were transmitted to the PACS as grayscale softcopy presentation state (GSPS) files. We also developed a prototype for quantitative OP analysis. We believe that applying DL frameworks to object recognition and analysis on OPs is meaningful and worth further investigation.
The increasing incidence of diabetes mellitus (DM) in modern society has become a serious issue. DM can also lead to several secondary clinical complications, one of which is diabetic retinopathy (DR), the leading cause of new cases of blindness among adults in the United States. While DR can be treated if screened and caught early in its progression, the only currently effective method to detect symptoms of DR in the eyes of DM patients is manual analysis of fundus images. Manual analysis of fundus images is time-consuming for ophthalmologists and can reduce access to DR screening in rural areas. Therefore, effective automatic prescreening tools on a cloud-based platform might be a potential solution to that problem. Recently, deep learning (DL) approaches have shown state-of-the-art performance in image analysis tasks. In this study, we established a research PACS for fundus images to view DICOMized and anonymized fundus images. We prototyped a deep learning engine in the PACS server to perform prescreening classification of uploaded fundus images into DR grades. We fine-tuned a deep convolutional neural network (CNN) model pretrained on the ImageNet dataset using over 30,000 labeled image samples from the public Kaggle Diabetic Retinopathy Detection fundus image dataset. We linked the PACS repository with the DL engine and displayed the predicted DR result in the PACS worklist. The initial prescreening results were promising, and such applications could have potential as a "second reader" with future CAD development for next-generation PACS.
Retinal changes on a fundus image have been found to be related to a series of diseases. Traditional retinal image quantitative features are usually collected by various standalone, proprietary software packages, which results in variability in feature extraction and data collection. Building on our previously established web-based imaging informatics platform for viewing DICOMized and de-identified fundus images, we developed a computer-aided detection structured report (CADe SR) to capture quantitative features on fundus images, such as the arteriole/venule diameter ratio and the cup/disc diameter ratio, and to record lesions such as aneurysms, hemorrhages, neovascularization, and exudates into different regions based on established research and clinical templates such as the Early Treatment Diabetic Retinopathy Study (ETDRS) nine-region map and a four-region map. In this way, the location patterns of these lesions, as well as morphological changes of anatomical structures, can be saved in the SR for further radiomics research. In addition, an online consultation tool was developed to facilitate discussion among clinicians and researchers regarding any uncertainty in measurements. Compared with the present workflow of using standalone software to obtain quantitative results, the CADe SR acquires qualitative and quantitative data directly, giving researchers and clinicians the ability to capture findings and fostering future image-based knowledge discovery research.
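The four-region map mentioned above can be approximated by a simple quadrant assignment around the optic disc center. The axis conventions and region names below are illustrative assumptions rather than the CADe SR's actual template definition.

```python
import math

def four_region(lesion_xy, disc_xy):
    """Assign a lesion to one of four quadrants around the optic disc.

    Simplified stand-in for a four-region map: image coordinates assumed,
    with x growing rightward and y growing downward, and +x taken as the
    temporal direction (as for a right eye) -- both are assumptions.
    """
    dx = lesion_xy[0] - disc_xy[0]
    dy = lesion_xy[1] - disc_xy[1]
    # Flip y so the angle is measured in conventional math orientation.
    angle = math.degrees(math.atan2(-dy, dx)) % 360
    if 45 <= angle < 135:
        return "superior"
    if 135 <= angle < 225:
        return "nasal"
    if 225 <= angle < 315:
        return "inferior"
    return "temporal"
```

A structured report entry would then pair each detected lesion with its region label, so location patterns can be aggregated across cases.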