Quantitative analysis of the regional motion of the left hemi-diaphragm (LHD) and right hemi-diaphragm (RHD) can provide information about the distribution and severity of abnormalities in individual patients with conditions that affect respiration, such as thoracic insufficiency syndrome (TIS). Such motion can be captured effectively with dynamic magnetic resonance imaging (dMRI), which involves no ionizing radiation and can be performed under free-breathing conditions. Motion analysis of the diaphragm can then be carried out on 4D images constructed from dMRI, which in turn requires segmenting the diaphragm in those 4D images. In this paper, we present our methodology for segmenting the left and right diaphragms, implemented in three steps: recognition of the diaphragm, delineation of the diaphragm, and splitting of the diaphragm along the mid-sagittal plane into LHD and RHD. The challenges posed by dMRI include low spatial resolution, motion blur, suboptimal contrast resolution, inconsistent meaning of gray-level intensities for the same object across multiple scans, and low signal-to-noise ratio. Using 200 3D images for training and 100 for testing, the recognition step achieves an average location error of about one and a half voxels. The delineation step achieves an average mean Hausdorff distance (mean-HD) of about one and a half pixels, and the mid-sagittal plane is identified to within a quarter of a voxel. These results are promising, showing that our system can cope with the aforesaid challenges.
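The third step above, splitting the segmented diaphragm along the mid-sagittal plane into LHD and RHD, can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes the diaphragm mask is a 3D binary numpy array whose first axis is the left-right direction, and that the mid-sagittal plane has already been identified as a sagittal slice index (both assumptions; axis convention may need flipping per dataset).

```python
import numpy as np

def split_diaphragm(mask: np.ndarray, mid_sagittal_x: int):
    """Split a binary diaphragm mask into hemi-diaphragms along a
    pre-identified mid-sagittal plane (illustrative sketch).

    mask           : 3D binary array indexed as (x, y, z), x = left-right axis
    mid_sagittal_x : sagittal slice index of the mid-sagittal plane
    """
    left = np.zeros_like(mask)
    right = np.zeros_like(mask)
    # Assumption: the patient's right side lies at low x (radiological
    # convention); swap the two assignments if the dataset differs.
    right[:mid_sagittal_x] = mask[:mid_sagittal_x]
    left[mid_sagittal_x:] = mask[mid_sagittal_x:]
    return left, right
```

The two returned masks are disjoint by construction and together cover the original mask exactly.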
KEYWORDS: Anatomy, Statistical modeling, Image segmentation, Modeling, Education and training, Data modeling, Medical imaging, Artificial intelligence, Deep learning, Fuzzy logic
Organ segmentation is a crucial task in various medical imaging applications. Many deep learning models have been developed for this purpose, but they tend to be slow and computationally expensive. To address this, attention mechanisms can be used to locate important objects of interest within medical images, allowing a model to segment them accurately even in the presence of noise or artifacts. By attending to specific anatomical regions, the model becomes better at segmentation. Medical images have unique features in the form of anatomical information, which distinguishes them from natural images. Unfortunately, most deep learning methods either ignore this information or do not use it effectively and explicitly. Combining natural intelligence with artificial intelligence, known as hybrid intelligence, has shown promising results in medical image segmentation, making models more robust and able to perform well in challenging situations. In this paper, we propose several methods and models to find attention regions in medical images for deep learning-based segmentation via non-deep-learning methods. We developed these models and trained them using hybrid intelligence concepts. To evaluate their performance, we tested the models on unique test data and analyzed metrics including the false-negative quotient and false-positive quotient. Our findings demonstrate that object shape and layout variations can be explicitly learned to create computational models suited to each anatomic object. This work opens new possibilities for advancements in medical image segmentation and analysis.
Organ segmentation is a fundamental requirement in medical image analysis. Many segmentation methods have been proposed over the past six decades. A unique feature of medical images is the anatomical information hidden within the image itself. To bring natural intelligence (NI) in the form of anatomical information accumulated over centuries into deep learning (DL) AI methods effectively, we have recently introduced the idea of hybrid intelligence (HI), which combines NI and AI, and a system based on HI to perform medical image segmentation. This HI system has shown remarkable robustness to image artifacts, pathology, deformations, etc. in segmenting organs in the Thorax body region in a multicenter clinical study. The HI system utilizes an anatomy modeling strategy to encode NI and to identify a rough container region in the shape of each object via a non-DL-based approach, so that DL training and execution are applied only to the fuzzy container region. In this paper, we introduce several advances related to modeling of the NI component so that it becomes substantially more efficient computationally and, at the same time, is well integrated with the DL portion (AI component) of the system. We demonstrate a 9- to 40-fold computational improvement in the auto-segmentation task for radiation therapy (RT) planning via clinical studies obtained from 4 different RT centers, while retaining the state-of-the-art accuracy of the previous system in segmenting 11 objects in the Thorax body region.
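The container-region strategy described above, where DL is applied only within a rough region supplied by the non-DL anatomy model, can be sketched as follows. This is an illustrative simplification, not the HI system itself: the container mask and the `dl_delineate` callable are hypothetical stand-ins for the anatomy model's output and the trained network.

```python
import numpy as np

def segment_within_container(image, container_mask, dl_delineate):
    """Run DL delineation only inside a rough container region
    (sketch of the container-restriction idea, not the actual system).

    image          : 3D intensity array
    container_mask : 3D binary array marking the container region
    dl_delineate   : callable mapping a cropped sub-image to a binary mask
                     (placeholder for a trained encoder-decoder network)
    """
    # Tight bounding box of the container region.
    coords = np.argwhere(container_mask)
    lo, hi = coords.min(axis=0), coords.max(axis=0) + 1
    crop = image[lo[0]:hi[0], lo[1]:hi[1], lo[2]:hi[2]]
    # Delineate only within the crop, then paste back into full-image space.
    sub_seg = dl_delineate(crop)
    seg = np.zeros_like(container_mask)
    seg[lo[0]:hi[0], lo[1]:hi[1], lo[2]:hi[2]] = sub_seg
    return seg
```

Restricting training and inference to the crop is what yields the computational savings the abstract reports, since the network never sees the full image volume.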
This paper proposes a deep neural network, the Geographic Attention Model (GA-Net), for body composition tissue segmentation. By adding an auxiliary body area prediction task, our method exploits the rich semantic and spatial features contained in the body area and fuses the features of both the area and the body composition tissues. In this way, GA-Net achieves superior performance for body composition tissue segmentation, especially at the indistinguishable boundaries between multiple tissues. The enhanced representation ability also allows GA-Net to generalize well on limited datasets.
Recently, deep learning networks have achieved considerable success in segmenting organs in medical images. Several methods have used volumetric information with deep networks to improve segmentation accuracy. However, for very challenging objects such as the brachial plexuses, these networks suffer from interference, a risk of overfitting, and reduced accuracy caused by artifacts. In this paper, to address these issues, we synergize the strengths of high-level human knowledge (i.e., natural intelligence (NI)) with deep learning (i.e., artificial intelligence (AI)) for recognition and delineation of the thoracic brachial plexuses (BPs) in computed tomography (CT) images. We formulate an anatomy-guided deep learning hybrid intelligence approach for segmenting the thoracic right and left brachial plexuses, consisting of two key stages. In the first stage (AAR-R), objects are recognized based on a previously created fuzzy anatomy model of the body region with its key organs relevant for the task at hand, wherein high-level human anatomic knowledge is precisely codified. The second stage (DL-D) uses information from AAR-R to limit the search region to just where each object is most likely to reside and performs encoder-decoder delineation in slices. The proposed method is tested on a dataset of 125 thorax images acquired for radiation therapy planning of tumors in the thorax and achieves a Dice coefficient of 0.659.
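The second stage described above, slice-wise encoder-decoder delineation restricted to the region recognized by AAR-R, can be sketched as follows. This is a hedged illustration under stated assumptions: the recognized region is taken to be an axis-aligned bounding box, and `slice_model` is a hypothetical placeholder for the trained 2D encoder-decoder.

```python
import numpy as np

def delineate_bp(image, recognized_box, slice_model):
    """DL-D stage sketch: apply a 2D delineation model slice by slice,
    but only within the region supplied by the recognition stage.

    image          : 3D array indexed as (z, y, x)
    recognized_box : (z0, z1, y0, y1, x0, x1) from the recognition stage
    slice_model    : callable mapping a 2D slice to a 2D binary mask
                     (placeholder for the trained encoder-decoder)
    """
    z0, z1, y0, y1, x0, x1 = recognized_box
    seg = np.zeros(image.shape, dtype=np.uint8)
    for z in range(z0, z1):
        # Everything outside the recognized box is left as background.
        seg[z, y0:y1, x0:x1] = slice_model(image[z, y0:y1, x0:x1])
    return seg
```

Because the network only ever sees the recognized sub-region, false positives far from the plausible BP location are excluded by construction.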
Quantitative analysis of the dynamic properties of thoraco-abdominal organs such as lungs during respiration could lead to more accurate surgical planning for disorders such as Thoracic Insufficiency Syndrome (TIS). This analysis can be done from semi-automatic delineations of the aforesaid organs in scans of the thoraco-abdominal body region. Dynamic magnetic resonance imaging (dMRI) is a practical and preferred imaging modality for this application, although automatic segmentation of the organs in these images is very challenging. In this paper, we describe an auto-segmentation system we built and evaluated based on dMRI acquisitions from 95 healthy subjects. For the three recognition approaches, the system achieves a best average location error (LE) of about one voxel for the lungs. The standard deviation (SD) of LE is about one to two voxels. For the delineation approach, the average Dice Coefficient (DC) is about 0.95 for the lungs. The standard deviation of DC is about 0.01 to 0.02 for the lungs. The system seems to be able to cope with the challenges posed by low resolution, motion blur, inadequate contrast, and image intensity non-standardness quite well. We are in the process of testing its effectiveness on TIS patient dMRI data and on other thoraco-abdominal organs including liver, kidneys, and spleen.
Organ localization is a common and essential preprocessing operation for many medical image analysis tasks. We propose a novel multi-organ localization method based on an end-to-end 3D convolutional neural network. The proposed algorithm employs a regression network to learn the positional relationship between any patch and the target organs in a medical computed tomography (CT) image. With this framework, the method can iteratively localize the target organs in a coarse-to-fine manner. The main idea behind this method is to embed the anatomy of structures in a deep learning-based approach. For implementation, the proposed network outputs an 8-dimensional vector that contains information about the position, scale, and presence of each target organ. A piecewise loss function and a multi-density sampling strategy help optimize this network to learn anatomy layout characteristics over the entire CT image. Starting from a random position, the network can accurately locate a target organ within a few iterations. Moreover, a dual-resolution strategy is employed to mitigate the accuracy loss caused by varying organ scales, further enhancing the localization performance for all organs. We evaluate our method on a public dataset (LiTS) to locate 11 organs in the thoraco-abdomino-pelvic region. The proposed method outperforms state-of-the-art methods with a mean intersection over union (IoU) of 80.84%, mean wall distance of 3.63 mm, and mean centroid distance of 4.93 mm, constituting excellent accuracy. The improvements on relatively small- and medium-size organs are noteworthy.
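The coarse-to-fine iterative scheme above, where a regression network repeatedly predicts the displacement from the current patch position toward the target organ, can be sketched as a simple fixed-point loop. This is a deliberately simplified view, not the paper's method: it ignores the scale and presence outputs of the 8-dimensional vector and treats `predict_offset` as a hypothetical stand-in for the trained network.

```python
import numpy as np

def iterative_localize(predict_offset, start, n_iters=8):
    """Iterative localization sketch: starting from an arbitrary position,
    repeatedly move by the network's predicted displacement toward the
    target organ until the position converges.

    predict_offset : callable mapping a (z, y, x) position to a predicted
                     displacement toward the organ (placeholder for the
                     regression network evaluated on the patch at that position)
    start          : initial (z, y, x) position
    n_iters        : number of refinement iterations
    """
    pos = np.asarray(start, dtype=float)
    for _ in range(n_iters):
        pos = pos + np.asarray(predict_offset(pos))
    return pos
```

If each prediction recovers even a fixed fraction of the remaining displacement, the residual error shrinks geometrically with the number of iterations, which is why a few iterations suffice in practice.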