Automation in medical image segmentation is critical in helping oncologists and surgeons analyse pathological conditions accurately while saving time. The ability to segment the liver automatically, quickly and accurately enables clinicians to understand the anatomical structure of the organ, supports decision making in diagnosis and surgical planning, and provides an anatomical map during surgical navigation, which is especially important when intraoperative imaging modalities are used. This work aims to develop an automatic liver parenchyma segmentation network based on U-Net, a widely used architecture for medical image segmentation. The modified U-Net uses fewer convolutional layers, adds dropout layers, and pre-processes the dataset to overcome the constraints of a small sample set. Reducing the architectural complexity and introducing dropout regularization address the problem of overfitting. We experimented with a callback that monitors training, applies an early-stopping policy and retains the best model. Adding Gaussian noise to the data helps the model generalise. To choose an appropriate loss function we tested four candidates (Dice, binary cross-entropy, Tversky and focal Tversky) and concluded that Dice performs best. The network has been trained and validated on the publicly available 3D-IRCADb dataset, containing images from 20 patients, and achieved an overall Dice score of 94.5%. The overall objective of this work is to construct a network from a small sample set that neither overfits nor underfits while delivering an acceptable Dice score.
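For reference, the Dice score used to report the 94.5% figure above is the standard overlap measure between a predicted segmentation P and a ground-truth mask G; this definition is general background and is not quoted from the abstract itself:

$$\mathrm{Dice}(P, G) = \frac{2\,|P \cap G|}{|P| + |G|}, \qquad \mathcal{L}_{\mathrm{Dice}} = 1 - \mathrm{Dice}(P, G).$$

In practice the Dice loss is usually computed over per-voxel probabilities with a small smoothing constant added to the numerator and denominator to avoid division by zero; the exact smoothed formulation used by the authors is not stated in the abstract.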
KEYWORDS: Medical imaging, Ultrasonography, Mobile devices, Video, Data modeling, Imaging systems, 3D image reconstruction, Medicine, Complex systems, Dielectrophoresis
Introduction: Medical imaging technology has revolutionized health care over the past 30 years. This is especially true for ultrasound, a modality that an increasing number of medical personnel are starting to use. Purpose: The purpose of this study was to develop and evaluate a platform for improving medical image interpretation skills regardless of time and place, and without the need for expensive imaging equipment or a patient to scan. Methods, results and conclusions: A stable web application with the functionality needed for image interpretation training and evaluation has been implemented. The system has been extensively tested internally and used during an international course in ultrasound-guided neurosurgery. The web application was well received and achieved very good System Usability Scale (SUS) scores.
KEYWORDS: Video, Ultrasonography, Image-guided intervention, Medical imaging, 3D video streaming, Computed tomography, 3D image processing, Imaging systems, Visualization, Surgery
The image-guided surgery toolkit (IGSTK) is an open source C++ library that provides the basic components required for developing image-guided surgery applications. While the initial version of the toolkit has been released, some additional functionality is required for certain applications. With increasing demand for real-time intraoperative image data in image-guided surgery systems, we are adding a video grabber component to IGSTK to access intraoperative imaging data such as video streams. Intraoperative data could be acquired from real-time imaging modalities such as ultrasound or endoscopic cameras. The acquired image could be displayed as a single slice in a 2D window or integrated into a 3D scene. For accurate display of the intraoperative image relative to the patient's preoperative image, proper interaction and synchronization with IGSTK's tracker and other components is necessary. Several issues must be considered during the design phase: 1) the functions of the video grabber component; 2) the interaction of the video grabber component with existing and future IGSTK components; and 3) the layout of the state machine in the video grabber component. This paper describes the design of the video grabber component and presents example applications that use it.
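To make the three design issues above more concrete, the following self-contained C++ sketch shows one possible shape for such a component: Request-style methods whose effects are guarded by a small internal state machine, plus an observer hook through which grabbed frames could be handed to a 2D slice view or a 3D scene. It only mirrors IGSTK's general conventions; every class, method and state name here (VideoGrabberSketch, RequestOpen, RequestStartGrabbing, and so on) is an illustrative assumption, not the actual IGSTK video grabber API.

// Simplified, self-contained sketch of a possible video grabber design.
// Names are hypothetical and do NOT reproduce the real IGSTK API; they only
// imitate IGSTK's pattern of "Request..." methods mediated by a state machine.
#include <functional>
#include <iostream>
#include <string>
#include <vector>

// A single grabbed frame (e.g. one ultrasound or endoscopic video image).
struct VideoFrame {
    int width  = 0;
    int height = 0;
    std::vector<unsigned char> pixels;  // grey-scale pixel buffer
};

class VideoGrabberSketch {
public:
    // States of the component's internal state machine.
    enum class State { Idle, Initialized, Grabbing };

    // Observers (e.g. a 2D window or a 3D scene) register a callback that
    // receives each new frame; this stands in for an event/observer mechanism.
    using FrameObserver = std::function<void(const VideoFrame&)>;

    void AddFrameObserver(FrameObserver observer) {
        m_Observers.push_back(std::move(observer));
    }

    // Request to open the video device; only valid from the Idle state.
    void RequestOpen(const std::string& deviceName) {
        if (m_State != State::Idle) {
            std::cerr << "RequestOpen ignored: invalid state\n";
            return;  // the state machine rejects invalid transitions
        }
        m_DeviceName = deviceName;
        m_State = State::Initialized;
    }

    // Request to start grabbing; only valid once the device is initialized.
    void RequestStartGrabbing() {
        if (m_State != State::Initialized) {
            std::cerr << "RequestStartGrabbing ignored: invalid state\n";
            return;
        }
        m_State = State::Grabbing;
    }

    // Request to stop grabbing and return to the initialized state.
    void RequestStopGrabbing() {
        if (m_State == State::Grabbing) {
            m_State = State::Initialized;
        }
    }

    // In a real component a capture thread would call this for every frame
    // delivered by the ultrasound scanner or endoscopic camera.
    void PushFrame(const VideoFrame& frame) {
        if (m_State != State::Grabbing) {
            return;  // frames are dropped unless we are actively grabbing
        }
        for (const auto& observer : m_Observers) {
            observer(frame);
        }
    }

private:
    State m_State = State::Idle;
    std::string m_DeviceName;
    std::vector<FrameObserver> m_Observers;
};

int main() {
    VideoGrabberSketch grabber;

    // A stand-in for a 2D view that would display the live frame as a slice.
    grabber.AddFrameObserver([](const VideoFrame& frame) {
        std::cout << "Received " << frame.width << "x" << frame.height
                  << " frame for display\n";
    });

    grabber.RequestStartGrabbing();      // rejected: device not opened yet
    grabber.RequestOpen("ultrasound0");  // Idle -> Initialized
    grabber.RequestStartGrabbing();      // Initialized -> Grabbing

    VideoFrame frame;
    frame.width  = 640;
    frame.height = 480;
    frame.pixels.assign(640 * 480, 0);
    grabber.PushFrame(frame);            // delivered to the registered observer

    grabber.RequestStopGrabbing();       // Grabbing -> Initialized
    return 0;
}

The point of routing every public call through the state machine, as in this sketch, is that requests arriving in the wrong order (for example starting to grab before the device is opened) are rejected rather than left to cause undefined behaviour, which is the safety rationale behind IGSTK's state-machine-driven component design described above.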