Quantitative analysis of the regional motion of the left hemi-diaphragm (LHD) and right hemi-diaphragm (RHD) can provide information regarding the distribution and severity of abnormalities in individual patients with conditions that affect respiration, such as thoracic insufficiency syndrome (TIS). Such motion can be captured effectively with dynamic magnetic resonance imaging (dMRI), which does not involve ionizing radiation and can be performed under free-breathing conditions. Motion analysis of the diaphragm can be performed on 4D images constructed from dMRI, which in turn requires diaphragm segmentation in the 4D images. In this paper, we present our methodology for segmentation of the left and right diaphragms, implemented in three steps: diaphragm recognition, diaphragm delineation, and splitting of the diaphragm along the mid-sagittal plane into LHD and RHD. The challenges involved in dMRI images are low spatial resolution, motion blur, suboptimal contrast resolution, inconsistent meaning of gray-level intensities for the same object across multiple scans, and low signal-to-noise ratio. Utilizing 200 and 100 3D images for training and testing, respectively, an average location error of one and a half voxels is achieved for the recognition step. For the delineation step, an average mean Hausdorff distance (mean-HD) of one and a half pixels is achieved. The mid-sagittal plane is identified to within a quarter of a voxel. These results are promising, showing that our system can cope with the aforesaid challenges.
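The final splitting step can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes the diaphragm is given as a binary 3D mask, that the mid-sagittal plane is a single sagittal slab index along axis 0, and the function name is hypothetical.

```python
import numpy as np

def split_diaphragm(mask: np.ndarray, mid_sagittal_idx: int):
    """Split a binary 3D diaphragm mask into two hemi-diaphragms along a
    sagittal plane at index mid_sagittal_idx (axis 0 assumed left-right).

    Which half is anatomically 'left' vs 'right' depends on the scanner's
    image orientation convention; this sketch simply returns the two halves.
    """
    half_a = mask.copy()
    half_b = mask.copy()
    half_a[mid_sagittal_idx:, :, :] = 0  # keep voxels before the plane
    half_b[:mid_sagittal_idx, :, :] = 0  # keep voxels at/after the plane
    return half_a, half_b
```

In practice the plane index would come from the mid-sagittal plane detection step; a sub-voxel plane position (as reported above) would be rounded or handled with interpolation.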
In pediatric patients with respiratory abnormalities, it is important to understand the alterations in regional dynamics of the lungs and other thoracoabdominal components, which in turn requires a quantitative understanding of what is considered normal in healthy children. Currently, such a normative database of regional respiratory structure and function in healthy children does not exist. The purpose of this study is to introduce a large open-source normative database from our ongoing Virtual Growing Child (VGC) project, which includes measurements of volumes, architecture, and regional dynamics in healthy children (6–20 years) derived from dynamic magnetic resonance imaging (dMRI). The database provides four categories of regional respiratory measurement parameters: morphological, architectural, dynamic, and developmental. The database has 3,820 3D segmentations (around 100,000 2D slices with segmentations), which to our knowledge makes it the largest dMRI dataset of healthy children. The database is unique in providing dMRI images, object segmentations, and quantitative regional respiratory measurement parameters for healthy children. It can serve as a reference standard to quantify regional respiratory abnormalities on dMRI in young patients with various respiratory conditions and facilitate treatment planning and response assessment. The database can also be useful for advancing future AI-based research on MRI-based object segmentation and analysis.
Thoracic insufficiency syndrome (TIS) patients often suffer from spinal deformation, leading to alterations in regional respiratory structure and function, so it is important to understand the dynamic thoracoabdominal architecture and its change after surgery. Free-breathing quantitative dynamic MRI (QdMRI) provides a practical solution for evaluating the regional dynamics of the thorax quantitatively in TIS patients. Our current aim is to investigate whether QdMRI can also be utilized to measure thoracoabdominal architecture in TIS patients before and after surgery. 49 paired TIS patients (imaged before and after surgery, yielding 98 dynamic MRI scans) and another 150 healthy children comprise our study cohort. 248 dynamic MRI scans were first acquired, and 248 4D images were then constructed from them. 3D volume images at end expiration (EE) and end inspiration (EI) were used in the analysis, for a total of 496 3D volume images in this study. Left and right lungs, left and right hemi-diaphragms, left and right kidneys, and liver were then segmented automatically via deep learning prior to architectural analysis. Architectural parameters (3D distances and angles computed from the centroids of multiple objects) at EE and EI of TIS patients and healthy children were computed and compared via t-testing. The distance between the right lung and right hemi-diaphragm is found to be significantly larger at EI than at EE for both TIS patients and healthy children, and after surgery it becomes closer to that of healthy children.
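One of the architectural parameters described above, the 3D distance between the centroids of two segmented objects, can be sketched as below. This is an illustrative computation under stated assumptions (binary masks, voxel size given per axis), not the authors' code; the function names are hypothetical.

```python
import numpy as np

def centroid(mask: np.ndarray) -> np.ndarray:
    """Centroid of a binary 3D mask, in voxel coordinates."""
    return np.argwhere(mask > 0).mean(axis=0)

def centroid_distance(mask_a: np.ndarray,
                      mask_b: np.ndarray,
                      voxel_size=(1.0, 1.0, 1.0)) -> float:
    """3D Euclidean distance between two object centroids, scaled by the
    per-axis voxel size (e.g., in mm); axes assumed ordered consistently
    with voxel_size."""
    diff = (centroid(mask_a) - centroid(mask_b)) * np.asarray(voxel_size)
    return float(np.linalg.norm(diff))
```

Computing such a distance at EE and at EI for each subject yields the paired values that are then compared across cohorts via t-testing.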
KEYWORDS: Cancer detection, Deep learning, Image segmentation, Education and training, Tumors, Pathology, Object detection, Fluorescence, Data modeling
Circulating tumor cells (CTCs) are cancerous cells that shed from a primary tumor and intravasate into the bloodstream. Screening for CTCs, a form of liquid biopsy, is effective for detecting cancer at an early stage and informing treatment decisions. Expert cytopathologists have been required to review these images to screen for cancer, going through anywhere from hundreds to thousands of images, at a heavy cost in time and effort. We therefore propose a U-Net-based approach to detect and enumerate CTCs more efficiently. The ball-scale (B-scale) transform, a filter that assigns to each pixel the radius of the largest homogeneous ball centered at that pixel, was introduced into this field of digital pathology alongside our proposed novel deep learning-based (U-Net) CTC detection and enumeration approach. We collected 466 images for CTC detection and another 198 images with 323 CTCs for testing CTC enumeration. We investigated two ways of using the B-scale image: as an additional input channel for the deep network, and at the output stage, where the high-level information (size and shape) encoded in the B-scale image itself is used to perform enumeration. We also tested deep learning-based CTC detection using different labels. Results show that our method substantially outperforms thresholding-based approaches, with a missing rate of 0.04 versus 0.30. Our method is also comparable and competitive with the results in recent publications and may facilitate other types of research.
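The missing rate reported above can be sketched as the fraction of ground-truth CTCs that the method fails to detect. This is an assumed definition for illustration; the paper may define the rate per image or per sample.

```python
def missing_rate(num_detected_true: int, num_ground_truth: int) -> float:
    """Fraction of ground-truth CTCs not detected (assumed definition).

    num_detected_true: ground-truth CTCs that were correctly detected.
    num_ground_truth:  total ground-truth CTCs in the test set.
    """
    if num_ground_truth == 0:
        return 0.0  # no ground-truth objects: nothing can be missed
    return (num_ground_truth - num_detected_true) / num_ground_truth
```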
Currently, 3T hip MRI can be used to estimate femur strength and cortical bone thickness. One of the major hurdles in this application is that objects (osseous structures) are segmented manually, which involves significant human labor. In this study, we propose an automatic and accurate algorithm for osseous structure segmentation from 3T hip MRI using a deep convolutional neural network. The approach includes two stages: 1) automatic localization of the acetabulum and femur using the femoral head as a reference, and 2) set-up of a 2D bounding box (BB) for each object based on the localization information from the femoral head, followed by a U-Net that segments the target object within the BB. 90 3T hip MRI data sets were utilized in this study, divided into training, validation, and testing groups (60%:20%:20%), with 5-fold cross-validation adopted in the procedure. The study showed that automated segmentation results were comparable to the reference standard from manual segmentation. The average Dice coefficient for acetabular and femoral (i.e., cortical and medullary bone plus bone marrow) segmentation was 0.93 and 0.96, respectively. Segmentations of the acetabular and femoral medullary cavity (i.e., medullary bone plus bone marrow) had Dice coefficients of 0.89 and 0.95, respectively. Acetabular and femoral cortical bone segmentations were more challenging, with lower Dice coefficients of around 0.7. The proposed approach is automatic and effective, requiring no human interaction. The idea of using locally salient anatomy to guide object localization is simple and can be easily generalized to other localization problems in practice.
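The Dice coefficient used to evaluate the segmentations above is a standard overlap measure between a predicted and a reference binary mask. A minimal sketch (not the authors' evaluation code):

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, ref: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks:
    2 * |pred AND ref| / (|pred| + |ref|), in [0, 1]."""
    pred = pred.astype(bool)
    ref = ref.astype(bool)
    denom = pred.sum() + ref.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(pred, ref).sum() / denom
```

A Dice coefficient of 0.96, as reported for femoral segmentation, indicates near-complete overlap, while values around 0.7 for thin cortical bone reflect how sensitive the measure is to small boundary errors on thin structures.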