Building machine learning models from scratch for clinical applications can be a challenging undertaking requiring varied levels of expertise. Given the heterogeneous nature of input data and specific task requirements, even seasoned developers and researchers may occasionally run into issues with incompatible frameworks. This is further complicated in the context of diagnostic radiology. Therefore, we developed the CRP10 AI Application Interface (CRP10AII) as a component of the Medical Imaging and Data Resource Center (MIDRC) to deliver a modular and user-friendly software solution that can efficiently address the needs of physicians and early-career AI developers who wish to explore, train, and test AI algorithms. The CRP10AII tool is a Python-based web framework connected to the GEN3 data commons; it offers the ability to develop AI models from scratch or employ pre-trained models, while allowing visualization and interpretation of the AI model's predictions. Here, we evaluate the capabilities of CRP10AII and its related human-API interaction factors. This evaluation investigates various aspects of the API, including: (i) robustness and ease of use; (ii) the usefulness of visualization in decision-making tasks; and (iii) further improvements needed for novice AI researchers with different levels of medical imaging and AI expertise. Users initially experienced trouble testing the API; however, the problems were resolved after additional explanations were provided. The user evaluation's findings demonstrate that, although the API's options are generally easy to understand and use and are helpful in decision-making tasks for users both with and without experience in medical imaging and AI, users differ in how they understand and use the various options. We also collected additional suggestions, such as adding more information fields and including more interactive components, to make the API more generalizable and customizable.
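The abstract does not describe implementation details, so the following is only a minimal sketch of how a Python client might query a GEN3 data commons for imaging metadata before exploring or training a model. The endpoint URL, credential token, and GraphQL fields are hypothetical placeholders, not the actual CRP10AII or MIDRC interface.

```python
# Hypothetical sketch: querying a GEN3 data commons GraphQL endpoint for
# case metadata. The URL, token handling, and query fields are placeholders
# and are NOT the actual CRP10AII/MIDRC API.
import json
import requests

COMMONS_URL = "https://example-data-commons.org"   # hypothetical commons endpoint
ACCESS_TOKEN = "..."                                # token obtained from the commons

query = """
{
  case(first: 10) {
    submitter_id
    project_id
  }
}
"""

response = requests.post(
    f"{COMMONS_URL}/api/v0/submission/graphql",     # assumed Gen3-style GraphQL path
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    json={"query": query},
    timeout=30,
)
response.raise_for_status()
print(json.dumps(response.json(), indent=2))
```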
The coronavirus disease 2019 (COVID-19) pandemic has wreaked havoc across the world. It also created a need for the urgent development of efficacious predictive diagnostics, specifically, artificial intelligence (AI) methods applied to medical imaging. This has led to the convergence of experts from multiple disciplines to solve this global pandemic, including clinicians, medical physicists, imaging scientists, computer scientists, and informatics experts, bringing to bear the best of these fields for solving the challenges of the COVID-19 pandemic. However, such a convergence over a very brief period of time has had unintended consequences and created its own challenges. As part of the Medical Imaging and Data Resource Center (MIDRC) initiative, we discuss the lessons learned from career transitions across the three involved disciplines (radiology, medical imaging physics, and computer science) and draw recommendations based on these experiences by analyzing the challenges associated with each of the three associated transition types: (1) AI of non-imaging data to AI of medical imaging data, (2) medical imaging clinician to AI of medical imaging, and (3) AI of medical imaging to AI of COVID-19 imaging. The lessons learned from these career transitions and the diffusion of knowledge among them could be accomplished more effectively by recognizing their associated intricacies. These lessons learned in transitioning to AI in the medical imaging of COVID-19 can inform and enhance future AI applications, making the whole of the transitions more than the sum of each discipline, for confronting an emergency like the COVID-19 pandemic or solving emerging problems in biomedicine.
An algorithm is under development to detect bone cancer in canine thermograms of the following body parts: elbow/knee (anterior and lateral camera views) and wrist (lateral view only). Currently, veterinary clinical practice uses several imaging techniques including radiology, computed tomography (CT), and magnetic resonance imaging (MRI). However, the harmful radiation involved, expensive equipment setup, long acquisition times, and the need for a cooperative patient during imaging are major drawbacks of these techniques. In veterinary procedures, it is very difficult for animals to remain still for the time periods necessary for standard imaging without resorting to sedation, which creates another set of complexities. The algorithm has been optimized through thousands of experiments to identify bone cancer in thermographic images. Optimal histogram features, Laws texture features, and gray-level co-occurrence matrix (GLCM) texture features are extracted, and the data are normalized using standard normal density and softmax normalization. Euclidean, Minkowski, and Tanimoto comparison metrics are used with a nearest-centroid classifier for pattern classification. Classification success rates as high as 88% for elbow/knee anterior, 85% for wrist lateral, and 86% for elbow/knee lateral have been achieved.
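As a rough illustration of the classification pipeline described above, the sketch below implements standard normal density (z-score) and softmax feature normalization together with a nearest-centroid classifier using Euclidean, Minkowski, and Tanimoto distances. The feature extraction step, variable names, and example data are assumptions for illustration, not the authors' code.

```python
# Sketch of the nearest-centroid pipeline described in the abstract.
# Feature extraction (histogram, Laws, GLCM) is assumed to have been done
# already; X_train, y_train, X_test are placeholder feature matrices.
import numpy as np

def snd_normalize(X, mean, std):
    """Standard normal density (z-score) normalization."""
    return (X - mean) / std

def softmax_normalize(X, mean, std):
    """Softmax (logistic) scaling of z-scored features into (0, 1)."""
    return 1.0 / (1.0 + np.exp(-(X - mean) / std))

def euclidean(a, b):
    return np.linalg.norm(a - b)

def minkowski(a, b, p=3):
    return np.sum(np.abs(a - b) ** p) ** (1.0 / p)

def tanimoto(a, b):
    """Tanimoto distance for real-valued feature vectors."""
    dot = np.dot(a, b)
    return 1.0 - dot / (np.dot(a, a) + np.dot(b, b) - dot)

def nearest_centroid_predict(X_train, y_train, X_test, metric=euclidean):
    """Assign each test vector to the class whose centroid is nearest."""
    classes = np.unique(y_train)
    centroids = {c: X_train[y_train == c].mean(axis=0) for c in classes}
    return np.array([
        min(classes, key=lambda c: metric(x, centroids[c])) for x in X_test
    ])

# Example usage with random placeholder data (2 classes, 10 features).
rng = np.random.default_rng(0)
X_train = rng.normal(size=(40, 10))
y_train = rng.integers(0, 2, 40)
X_test = rng.normal(size=(5, 10))
mean, std = X_train.mean(axis=0), X_train.std(axis=0) + 1e-12
pred = nearest_centroid_predict(
    snd_normalize(X_train, mean, std), y_train,
    snd_normalize(X_test, mean, std), metric=tanimoto)
print(pred)
```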