To improve the inter-slice resolution of the orbital bones, including cortical and thin bones, we propose an Orbital Bone Super-Resolution (OB-SR) method with a combined sagittal and axial mean absolute error (MAE) loss and a sagittal Thin-Bone Structure-Aware (TSA) loss. Our method consists of three stages: data preprocessing, intermediate-slice generation in the sagittal plane, and orbital bone quality improvement in the axial plane. In the intermediate-slice generation stage, a 2D CNN is applied, consisting of six convolutional layers for feature extraction, an up-sampling layer, and four convolutional layers for high-frequency detail refinement. The loss is the sum of two MAE losses, which improve the quality of the outputs of the up-sampling layer and the last convolutional layer, respectively. In the orbital bone quality improvement stage, the generated intermediate slices are compared to the corresponding original axial images using the axial MAE loss. Experimental results showed that our method generates intermediate slices with clear orbital bones in both the sagittal and axial views and enhances the boundaries of the thin bones.
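The two-output network and combined MAE loss described above can be sketched as follows. This is a minimal PyTorch sketch: the layer counts (six feature-extraction and four refinement convolutions, one up-sampling layer) come from the abstract, while the channel width, kernel sizes, up-sampling mode, and the assumption that up-sampling acts only along the slice axis are all assumptions, not the paper's exact architecture.

```python
# Hypothetical sketch of the intermediate-slice generation CNN; only the layer
# counts are from the abstract, everything else is an assumption.
import torch
import torch.nn as nn


class OBSRNet(nn.Module):
    def __init__(self, ch=8, scale=2):
        super().__init__()
        # six convolutional layers for feature extraction
        feats = [nn.Conv2d(1, ch, 3, padding=1), nn.ReLU(inplace=True)]
        for _ in range(5):
            feats += [nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True)]
        self.features = nn.Sequential(*feats)
        # up-sampling layer: enlarge along the slice axis only (assumed)
        self.upsample = nn.Sequential(
            nn.Upsample(scale_factor=(scale, 1), mode="bilinear",
                        align_corners=False),
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
        )
        self.to_img = nn.Conv2d(ch, 1, 3, padding=1)  # coarse up-sampled output
        # four convolutional layers for high-frequency detail refinement
        refine = []
        for _ in range(3):
            refine += [nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True)]
        refine.append(nn.Conv2d(ch, 1, 3, padding=1))
        self.refine = nn.Sequential(*refine)

    def forward(self, x):
        f = self.upsample(self.features(x))
        coarse = self.to_img(f)   # supervised by the first MAE loss
        fine = self.refine(f)     # supervised by the second MAE loss
        return coarse, fine


def combined_mae_loss(coarse, fine, target):
    """Sum of the two MAE losses on the up-sampling and final outputs."""
    mae = nn.L1Loss()
    return mae(coarse, target) + mae(fine, target)
```

Training both outputs against the same high-resolution target lets the refinement layers specialize on the high-frequency residual the up-sampling layer misses.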
We generate super-resolution orbital bone CT images using a 3D SRGAN network to improve the segmentation accuracy of thin bones in images with large slice thickness. Our method consists of data preparation and super-resolution thin-bone image generation. Experimental results show that thin-bone areas are clearly visible in the super-resolution images generated by the 3D SRGAN, and that these images are more similar to the original high-resolution images than those produced by bilinear and bicubic interpolation.
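For context, the interpolation baselines mentioned above amount to resampling the volume along the slice axis. The sketch below shows such a baseline with PyTorch's trilinear interpolation as the 3D analogue (the 3D SRGAN itself is not reproduced here, and using `F.interpolate` for the baseline is an assumption):

```python
# Illustrative interpolation baseline only; the 3D SRGAN is not shown.
import torch
import torch.nn.functional as F


def upsample_slices(volume, factor):
    """volume: (D, H, W) tensor of D thick slices; interpolate along D only."""
    v = volume[None, None]  # add batch and channel dims -> (1, 1, D, H, W)
    out = F.interpolate(v, scale_factor=(factor, 1, 1),
                        mode="trilinear", align_corners=False)
    return out[0, 0]
```

Such interpolation blurs thin-bone boundaries because it averages across slices, which is precisely the failure mode a learned super-resolution network aims to avoid.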
Sagittal craniosynostosis is the most common form of craniosynostosis; premature closure of the sagittal suture makes the skull biparietally narrow and elongated. Surgery in infants with sagittal craniosynostosis is a common treatment for correcting the shape of the deformed skull, but the morphological improvement of the skull before and after the operation is evaluated by the surgeon's subjective judgment. Therefore, we propose an efficient method to quantify the skull shape before and after surgery, based on a mean normal skull model, to assess the surgical outcome. In the preprocessing step, a skull model consisting of the outer surface of the skull is constructed from each of the pre- and post-operative CT images. To distinguish the individual cranial bones separated by sutures, the mean normal skull model is composed of five cranial bones. In the skull-model deformation step, to divide the whole preoperative skull model into regional bones, the mean normal skull model is deformed into the preoperative skull model, and the deformed mean normal skull model is then deformed into the postoperative skull model. In the regional skull shape index calculation step, the regional skull shape index is calculated to evaluate the degree of expansion and reduction of the postoperative skull relative to the preoperative skull. Experimental results showed that our regional skull shape index can quantify the degree of expansion and reduction of each cranial bone of the postoperative skull relative to the preoperative skull.
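One way such a regional index could be computed, once the mean normal model has been deformed to both skulls so that surface points correspond, is sketched below. The index definition here (mean post/pre radial-distance ratio per cranial bone) is a hypothetical illustration, not the paper's exact formula:

```python
# Hypothetical regional shape index: assumes corresponding surface points and
# per-point cranial-bone labels from the deformed mean normal model.
import numpy as np


def regional_shape_index(pre_pts, post_pts, labels):
    """pre_pts, post_pts: (N, 3) corresponding surface points;
    labels: (N,) cranial-bone label per point (e.g. 0..4 for five bones).
    Returns {label: mean post/pre radial-distance ratio}; >1 means expansion,
    <1 means reduction of that regional bone."""
    pre_r = np.linalg.norm(pre_pts - pre_pts.mean(axis=0), axis=1)
    post_r = np.linalg.norm(post_pts - post_pts.mean(axis=0), axis=1)
    return {int(b): float(np.mean(post_r[labels == b] / pre_r[labels == b]))
            for b in np.unique(labels)}
```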
Segmentation of the orbital bone is necessary for orbital wall reconstruction in cranio-maxillofacial surgery to support the eyeball position and restore the volume and shape of the orbit. However, orbital bone segmentation is challenging because the orbital bone is composed of high-intensity cortical bone and low-intensity trabecular and thin bones. In particular, the thin bones of the orbital medial wall and the orbital floor have intensity values similar to the surrounding soft tissues, making them indistinguishable due to the partial volume effect that occurs when CT images are generated. Thus, we propose an orbital bone segmentation method using multi-graylevel FCNs that segments cortical, trabecular, and thin bones with different intensities in head-and-neck CT images. To adjust the image properties of each dataset, pixel-spacing normalization and intensity normalization are performed. To overcome under-segmentation of the thin bones of the orbital medial wall, the single orbital bone mask is divided into cortical and thin-bone masks. The multi-graylevel FCNs, based on 2D U-Net, are trained separately on the cortical and thin-bone masks, and the cortical and thin-bone segmentation results are integrated to obtain the whole orbital bone segmentation. The results showed that the multi-graylevel FCNs improve segmentation accuracy for the thin bones of the medial wall compared to a single-graylevel FCN and thresholding.
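The mask splitting and result integration steps above can be sketched in a few lines of NumPy. The HU threshold separating cortical from thin/trabecular bone is an assumed placeholder value, not the paper's; the helper names are likewise hypothetical:

```python
# Hedged sketch of splitting one bone mask by intensity and merging the two
# per-class FCN predictions; threshold and names are assumptions.
import numpy as np

CORTICAL_HU = 400  # assumed HU threshold, not the paper's exact value


def split_bone_mask(ct_hu, bone_mask):
    """Divide a single orbital bone mask into cortical and thin-bone masks."""
    cortical = bone_mask & (ct_hu >= CORTICAL_HU)
    thin = bone_mask & (ct_hu < CORTICAL_HU)
    return cortical, thin


def merge_predictions(cortical_pred, thin_pred):
    """Union of the two FCN outputs gives the whole orbital bone result."""
    return cortical_pred | thin_pred
```

Training one network per intensity class keeps the low-contrast thin-bone voxels from being drowned out by the much brighter cortical bone in a single loss.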
This paper proposes morphological descriptors representing the degree of skull deformity in craniosynostosis on head CT images, and a hierarchical classifier distinguishing normal skulls from different types of craniosynostosis. First, to compare a deformed surface model with a mean normal surface model, mean normal surface models are generated for each age range, and the mean normal surface model is deformed into the deformed surface model via multi-level, three-stage registration. Second, four shape features, including local distance and area ratio indices, are extracted for each of the five cranial bones. Finally, a hierarchical SVM classifier is proposed to distinguish between normal and deformed skulls. As a result, the proposed method showed improved classification results compared to the traditional cranial index. Our method can be used for early diagnosis, surgical planning, and post-surgical assessment of craniosynostosis, as well as quantitative analysis of skull deformity.
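A hierarchical SVM of the kind described can be sketched as two cascaded binary/multi-class stages. The particular hierarchy below (first normal vs. deformity, then deformity subtype) and the scikit-learn setup are assumptions for illustration; feature extraction from the surface models is not shown:

```python
# Hypothetical two-stage hierarchical SVM; the stage split is an assumption.
import numpy as np
from sklearn.svm import SVC


class HierarchicalSVM:
    def __init__(self):
        self.stage1 = SVC(kernel="rbf")  # normal vs. craniosynostosis
        self.stage2 = SVC(kernel="rbf")  # craniosynostosis subtype

    def fit(self, X, y):
        """y: 0 = normal, >0 = craniosynostosis type."""
        self.stage1.fit(X, (y > 0).astype(int))
        mask = y > 0
        self.stage2.fit(X[mask], y[mask])  # subtypes only
        return self

    def predict(self, X):
        out = np.zeros(len(X), dtype=int)  # default: normal
        abnormal = self.stage1.predict(X).astype(bool)
        if abnormal.any():
            out[abnormal] = self.stage2.predict(X[abnormal])
        return out
```

Splitting the decision lets each stage use the features most discriminative for its own question, which is the usual motivation for a hierarchical over a flat multi-class classifier.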