Significance: Oral cancer surgery requires accurate margin delineation to balance complete resection with post-operative functionality. Current in vivo fluorescence imaging systems provide two-dimensional margin assessment yet fail to quantify tumor depth prior to resection. Harnessing structured light in combination with deep learning (DL) may provide near real-time three-dimensional margin detection.

Aim: A DL-enabled fluorescence spatial frequency domain imaging (SFDI) system trained with in silico tumor models was developed to quantify the depth of oral tumors.

Approach: A convolutional neural network was designed to produce tumor depth and concentration maps from SFDI images. Three in silico representations of oral cancer lesions were developed to train the DL architecture: cylinders, spherical harmonics, and composite spherical harmonics (CSHs). Each model was validated with in silico SFDI images of patient-derived tongue tumors, and the CSH model was further validated with optical phantoms.

Results: The performance of the CSH model was superior when presented with patient-derived tumors (P < 0.05). The CSH model could predict depth and concentration within 0.4 mm and 0.4 μg/mL, respectively, for in silico tumors with depths less than 10 mm.

Conclusions: A DL-enabled SFDI system trained with in silico CSHs demonstrates promise in defining the deep margins of oral tumors.
Fluorescence-guided surgery systems employed during oral cancer resection help detect the lateral margin yet fail to quantify the deep margins of the tumor prior to resection. Without comprehensive quantification of three-dimensional tumor margins, complete resection remains challenging. While intraoperative techniques to assess the deep margin exist, they are limited in precision, leaving an unmet need for a system that can quantify depth. Our group is developing a deep learning (DL)-enabled fluorescence spatial frequency domain imaging (SFDI) system to address this limitation. The SFDI system captures fluorescence (F) and reflectance (R) images that contain information on tissue optical properties (OP) and depth sensitivity across spatial frequencies. Coupling DL with SFDI imaging allows for the near-real-time construction of depth and concentration maps. Here, we compare three DL architectures that use SFDI images as inputs: i) F+OP, where OP (absorption and scattering) are obtained analytically from reflectance images; ii) F+R; and iii) F/R. Training the three models required 10,000 tumor samples; synthetic tumors derived from composite spherical harmonics circumvented the need for patient data. The synthetic tumors were passed to a diffusion-theory light propagation model to generate a dataset of artificial SFDI images for DL training. Two oral cancer models derived from MRI of patient tongue tumors were used to evaluate DL performance on: i) in silico SFDI images and ii) optical phantoms. These studies evaluate how system performance is affected by the SFDI input data and DL architectures. Future studies are required to assess system performance in vivo.
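As an illustration of the synthetic-tumor idea, the sketch below generates a random closed surface by perturbing a base radius with a weighted sum of real spherical harmonics. This is a hypothetical minimal example, not the authors' pipeline: the base radius, maximum harmonic degree, and coefficient distribution are all assumptions made here for demonstration.

```python
# Hedged sketch: a composite-spherical-harmonic (CSH) tumor surface.
# Assumed parameters (r0, l_max, coefficient scale) are illustrative only.
import numpy as np
from scipy.special import sph_harm

rng = np.random.default_rng(0)

def csh_radius(theta, phi, r0=3.0, l_max=4, scale=0.3):
    """Radius map r(theta, phi): base radius r0 (mm) perturbed by a
    random combination of real spherical harmonics up to degree l_max."""
    r = np.full_like(theta, r0)
    for l in range(1, l_max + 1):
        for m in range(-l, l + 1):
            c = scale * rng.normal() / (l + 1)  # damp higher-order terms
            # SciPy's sph_harm takes (order m, degree l, azimuth, polar)
            r += c * np.real(sph_harm(m, l, phi, theta))
    return np.clip(r, 0.5, None)  # keep the surface strictly positive

# Sample the closed surface on a polar/azimuthal grid
theta = np.broadcast_to(np.linspace(0, np.pi, 64)[:, None], (64, 128))
phi = np.broadcast_to(np.linspace(0, 2 * np.pi, 128)[None, :], (64, 128))
r = csh_radius(theta, phi)
```

Each sampled radius map defines one random lesion shape; in a training pipeline such shapes would be voxelized and passed to a forward light-propagation model to render the corresponding SFDI images.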