PURPOSE: Accurate preoperative planning is crucial for liver resection surgery due to the complex anatomical structures and variations among patients. Virtual resections defined by deformable surfaces are a promising approach to effective liver surgery planning; however, the range of available surface definitions raises the question of which definition is most appropriate. METHODS: This study compares NURBS and Bézier surfaces for defining virtual resections through a usability study in which 25 participants (19 biomedical researchers and 6 liver surgeons) completed tasks using different surface types and varying numbers of control points driving the surface deformations. Specifically, participants performed virtual liver resections using 16 and 9 control points for NURBS and Bézier surfaces. The goal was to assess whether they could attain an optimal resection plan, effectively balancing complete tumor removal with the preservation of enough healthy liver tissue and function to prevent postoperative liver dysfunction, despite working with fewer control points and different surface properties. Accuracy was assessed using the Hausdorff distance and the average surface distance. A survey based on the NASA Task Load Index measured user performance and preferences. RESULTS: NURBS surfaces exhibited improved accuracy and consistency over Bézier surfaces, with a lower average surface distance and lower variability of results. The 95th percentile Hausdorff distance indicates the robustness of NURBS surfaces for the task. Task completion time was influenced by the control point dimensions, favoring 3x3 NURBS surfaces (vs. 4x4) for a balanced accuracy-efficiency trade-off. Finally, the survey results indicated that participants preferred NURBS surfaces over Bézier surfaces, citing improved performance, easier surface manipulation, and reduced effort. CONCLUSION: The integration of NURBS surfaces into liver resection planning offers a promising advancement.
This study demonstrates their superiority in accuracy, efficiency, and user preference compared to Bézier surfaces. The findings underscore the potential of NURBS-based preoperative planning tools to enhance surgical outcomes in liver resection procedures.
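The two accuracy metrics used in the study can be sketched in a few lines, assuming the planned and reference resection surfaces are represented as point clouds `a` and `b` (hypothetical arrays; the study's actual meshes and sampling density are not specified):

```python
import numpy as np

def _directed_distances(a, b):
    # distance from every point in a to its nearest neighbor in b, and vice versa
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    return d.min(axis=1), d.min(axis=0)

def avg_surface_distance(a, b):
    # symmetric mean of nearest-neighbor distances in both directions
    d_ab, d_ba = _directed_distances(a, b)
    return (d_ab.mean() + d_ba.mean()) / 2.0

def hausdorff_95(a, b):
    # 95th percentile Hausdorff distance: less sensitive to a few outlier points
    # than the classical maximum, hence its use as a robustness indicator
    d_ab, d_ba = _directed_distances(a, b)
    return max(np.percentile(d_ab, 95), np.percentile(d_ba, 95))
```

The brute-force pairwise distance matrix is fine for illustration; production code on dense surface samples would typically use a k-d tree for the nearest-neighbor queries.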
Glioblastoma Multiforme (GBM) is the most common and most lethal primary brain tumor in adults, with a five-year survival rate of 5%. The current standard of care and survival rate have remained largely unchanged, in part because of the difficulty of surgically removing these tumors; the extent of surgical resection plays a crucial role in survival, as more complete resection leads to longer survival times. Thus, novel technologies need to be identified to improve resection accuracy. Our study features a curated database of GBM and normal brain tissue specimens, which we used to train and validate a multi-instance learning model for GBM detection via rapid evaporative ionization mass spectrometry (REIMS). This method enables real-time tissue typing. The specimens were collected by a surgeon, reviewed by a pathologist, and sampled with an electrocautery device. The dataset comprised 276 normal tissue burns and 321 GBM tissue burns. Our multi-instance learning model was adapted to identify the molecular signatures of GBM, and we employed a patient-stratified four-fold cross-validation approach for model training and evaluation. Our models demonstrated robustness and outperformed baseline models, with an improved AUC of 0.95 and accuracy of 0.95 in correctly classifying GBM and normal brain tissue. This study marks the first application of deep learning to REIMS data for brain tumor tissue characterization, and it sets the foundation for investigating more clinically relevant questions where intraoperative tissue detection in neurosurgery is pertinent.
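The patient-stratified cross-validation described above can be sketched as follows. The key constraint is that all burns from one patient land in the same fold, so no patient contributes to both training and test data; the round-robin assignment of patients to folds is an assumption here, since the abstract does not specify the allocation scheme:

```python
def patient_stratified_folds(patient_ids, n_folds=4):
    # assign each patient (not each burn) to exactly one fold, so that
    # no patient's burns appear in both training and test splits
    patients = sorted(set(patient_ids))
    fold_of_patient = {p: i % n_folds for i, p in enumerate(patients)}
    folds = [[] for _ in range(n_folds)]
    for sample_idx, p in enumerate(patient_ids):
        folds[fold_of_patient[p]].append(sample_idx)
    return folds
```

Each returned fold is a list of sample indices; fold k serves as the test set while the remaining folds form the training set, repeated for all four folds.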
Up to 35% of breast-conserving surgeries fail to resect all the tumor completely. Ideally, machine learning methods applied to data from the iKnife, which uses Rapid Evaporative Ionization Mass Spectrometry (REIMS), can predict tissue type in real time during surgery, resulting in better tumor resections. Because REIMS data are heterogeneous and weakly labeled, and datasets are often small, model performance and reliability can be adversely affected. Self-supervised training and uncertainty estimation can mitigate these challenges by learning the signatures of the input data without labels and by including predictive confidence in the output. We first design an autoencoder using a reconstruction pretext task as a self-supervised pretraining step, without considering tissue type. Next, we construct an uncertainty-aware classifier from the encoder part of the model with Masksembles layers to estimate the uncertainty associated with its predictions. The pretext task was trained on 190 burns collected from 34 patients in a basal cell carcinoma iKnife dataset. The model was further trained on breast cancer data comprising 200 burns collected from 15 patients. Our proposed model improves sensitivity and uncertainty metrics by 10% and 15.7% over the baseline, respectively. The proposed strategies improve uncertainty calibration and overall performance, toward reducing the likelihood of incomplete resection, supporting removal of minimal non-neoplastic tissue, and improving model reliability during surgery. Future work will focus on further testing the model on intraoperative data and additional ex vivo data following collection of more breast samples.
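The Masksembles-style uncertainty estimate can be illustrated with a minimal sketch: a small, fixed set of binary masks is applied to the encoder features, one forward pass is made per mask, and the spread of the resulting predictions serves as the uncertainty. The weights, mask count, and keep rate below are hypothetical placeholders standing in for the trained model, not the study's actual architecture:

```python
import numpy as np

def make_masks(n_masks, dim, keep=0.5, seed=0):
    # fixed binary masks, one per ensemble member; unlike ordinary dropout,
    # the same masks are reused at every forward pass (the Masksembles idea)
    rng = np.random.default_rng(seed)
    return (rng.random((n_masks, dim)) < keep).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict_with_uncertainty(features, w, b, masks):
    # one masked forward pass per ensemble member:
    # mean of the probabilities = prediction, std = uncertainty estimate
    probs = np.array([sigmoid((features * m) @ w + b) for m in masks])
    return probs.mean(axis=0), probs.std(axis=0)
```

Reporting the standard deviation alongside the predicted probability is what allows low-confidence calls to be flagged for the surgeon rather than silently acted on.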