Biomedical photoacoustics is typically used to image absorption-based contrast in soft tissue at depths of up to several centimeters and with sub-millimeter resolution. By contrast, photoacoustic (PA) signals measured through hard bone tissue suffer severe degradation due to aberration and the strong attenuation of high-frequency acoustic signal components. This is particularly pronounced when measuring through the thicker human skull bone, which is the main reason why transcranial PA imaging in humans has so far proved challenging to implement. To tackle this challenge, we developed an optical resonator sensor based on a previous plano-concave design. This sensor was found to be highly suitable for measuring the low-pressure-amplitude, low-frequency acoustic signals that are transmitted through human cranial bone. A plano-concave optical resonator sensor was fabricated to provide high sensitivity in the acoustic frequency range from DC to around 2 MHz, a low noise-equivalent pressure, and a small active element size, enabling it to significantly outperform conventional piezoelectric transducers when measuring PA waves transmitted through ex vivo human cranial bones.
While photoacoustic imaging can reach depths of several centimeters in soft tissue, bone tissue is difficult to penetrate. This is why transcranial photoacoustic imaging has so far proved challenging to implement, with the main challenge being acoustic losses in the skull. Our overall goal is to investigate the feasibility of transcranial photoacoustic sensing and imaging, and to study its usefulness for monitoring hemodynamics. In this study, we focus on the acoustic constraints imposed by the skull and present initial results from our ex vivo human skull phantoms.
KEYWORDS: Monte Carlo methods, Data modeling, Error analysis, Optoacoustics, Copper, Absorption, Scattering, Signal to noise ratio, Machine learning, Blood
Significance: Quantitative measurement of blood oxygen saturation (sO2) with optoacoustic (OA) imaging is one of the most sought after goals of quantitative OA imaging research due to its wide range of biomedical applications.
Aim: To develop a method for accurate and applicable real-time quantification of local sO2 with OA imaging.
Approach: We combine multiple illumination (MI) sensing with learned spectral decoloring (LSD). We train LSD feedforward neural networks and random forests on Monte Carlo simulations of spectrally colored absorbed energy spectra and then apply the trained models to real OA measurements. We validate our combined MI-LSD method on a highly reliable, reproducible, and easily scalable phantom model based on copper and nickel sulfate solutions.
Results: With this sulfate model, we see a consistently high estimation accuracy using MI-LSD, with median absolute estimation errors of 2.5 to 4.5 percentage points. We further find fewer outliers in MI-LSD estimates compared with LSD. Random forest regressors outperform previously reported neural network approaches.
Conclusions: Random forest-based MI-LSD is a promising method for accurate quantitative OA oximetry imaging.
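The core of the LSD approach is a supervised regressor that maps a normalised absorbed-energy spectrum to an sO2 value. Below is a minimal sketch of the random-forest variant using scikit-learn, assuming pre-computed Monte Carlo training data; the file names, array shapes, and L2 normalisation are illustrative assumptions rather than the exact pipeline of the study. For MI-LSD, the spectra recorded under the different illumination positions would be concatenated into one feature vector per pixel.

```python
# Sketch of learned spectral decoloring (LSD) with a random forest regressor.
# X: per-pixel absorbed-energy spectra (n_samples, n_wavelengths), y: ground-truth sO2.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

X_train = np.load("mc_spectra.npy")   # hypothetical file of simulated spectra
y_train = np.load("mc_so2.npy")       # hypothetical file of ground-truth sO2 in [0, 1]

# Normalise each spectrum so the model only sees its shape ("spectral colouring"),
# not the unknown absolute scaling of the measured signal.
X_train = X_train / np.linalg.norm(X_train, axis=1, keepdims=True)

rf = RandomForestRegressor(n_estimators=100, n_jobs=-1)
rf.fit(X_train, y_train)

# Apply the trained model to measured multispectral OA spectra,
# normalised in exactly the same way.
X_meas = np.load("measured_spectra.npy")
X_meas = X_meas / np.linalg.norm(X_meas, axis=1, keepdims=True)
so2_estimate = rf.predict(X_meas)
```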
Photoacoustic imaging is an emerging imaging modality enabling the recovery of functional tissue parameters such as blood oxygenation. However, quantifying these parameters remains challenging, mainly because the non-linear influence of the light fluence makes the underlying inverse problem ill-posed. We tackle this gap with invertible neural networks and present a novel approach to quantifying the uncertainties involved in reconstructing physiological parameters such as oxygenation. According to in silico experiments, blood oxygenation prediction with invertible neural networks, combined with an interactive visualization, could serve as a powerful method to investigate the effect of spectral coloring on blood oxygenation prediction tasks.
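To illustrate how an invertible network yields uncertainty estimates rather than a single point prediction, the sketch below draws posterior samples for one measured spectrum by resampling the latent variable of a trained model. The `inn` object with its `latent_dim` attribute and `inverse` method is a hypothetical placeholder, not the architecture used in the study; the spread of the returned samples (for example, their interquartile range) would serve as the per-pixel uncertainty.

```python
# Sketch of posterior sampling with a (hypothetical) trained invertible network.
import torch

def sample_posterior(inn, measured_spectrum, n_samples=1000):
    """Draw posterior samples of tissue parameters (e.g., sO2) for one spectrum."""
    y = torch.as_tensor(measured_spectrum, dtype=torch.float32)
    y = y.expand(n_samples, -1)                 # repeat the measurement n_samples times
    z = torch.randn(n_samples, inn.latent_dim)  # latent samples ~ N(0, I)
    with torch.no_grad():
        # hypothetical API: inverse pass maps [measurement, latent] -> parameters
        x_samples = inn.inverse(torch.cat([y, z], dim=1))
    return x_samples.numpy()
```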
One of the major applications of multispectral photoacoustic imaging is the recovery of functional tissue properties with the goal of distinguishing different tissue classes. In this work, we tackle this challenge by employing a deep learning-based algorithm called learned spectral decoloring for quantitative photoacoustic imaging. With the combination of tissue classification, sO2 estimation, and uncertainty quantification, powerful analyses and visualizations of multispectral photoacoustic images can be created. Consequently, these could be valuable tools for the clinical translation of photoacoustic imaging.
The International Photoacoustic Standardisation Consortium (IPASC), which emerged from SPIE 2018, was established to drive consensus on photoacoustic system testing. As photoacoustic imaging (PAI) matures from research laboratories into clinical trials, it is essential to establish best-practice guidelines for photoacoustic image acquisition, analysis and reporting, and a standardised approach for technical system validation. The primary goal of the IPASC is to create widely accepted phantoms for testing preclinical and clinical PAI systems. To achieve this, the IPASC has formed five working groups (WGs). The first and second WGs have defined optical and acoustic properties, suitable materials, and configurations of photoacoustic image quality phantoms. These phantoms consist of a bulk material embedded with targets to enable quantitative assessment of image quality characteristics including resolution and sensitivity across depth. The third WG has recorded details such as illumination and detection configurations of PAI instruments available within the consortium, leading to proposals for system-specific phantom geometries. This PAI system inventory was also used by WG4 in identifying approaches to data collection and sharing. Finally, WG5 investigated means for phantom fabrication, material characterisation and PAI of phantoms. Following a pilot multi-centre phantom imaging study within the consortium, the IPASC settled on an internationally agreed set of standardised recommendations and imaging procedures. This will lead to advances in: (1) quantitative comparison of PAI data acquired with different data acquisition and analysis methods; (2) provision of a publicly available reference data set for testing new algorithms; and (3) technical validation of new and existing PAI devices across multiple centres.
Multispectral photoacoustic (PA) imaging is a prime modality to monitor hemodynamics and changes in blood oxygenation (sO2). Although sO2 changes can be an indicator of brain activity both in normal and in pathological conditions, PA imaging of the brain has mainly focused on small animal models with lissencephalic brains. Therefore, the purpose of this work was to investigate the usefulness of multispectral PA imaging in assessing sO2 in a gyrencephalic brain. To this end, we continuously imaged a porcine brain as part of an open neurosurgical intervention with a handheld PA and ultrasonic (US) imaging system in vivo. Throughout the experiment, we varied respiratory oxygen and continuously measured arterial blood gases. The arterial blood oxygenation (SaO2) values derived by the blood gas analyzer were used as a reference to compare the performance of linear spectral unmixing algorithms in this scenario. According to our experiment, PA imaging can be used to monitor sO2 in the porcine cerebral cortex. While linear spectral unmixing algorithms are well-suited for detecting changes in oxygenation, there are limits with respect to the accurate quantification of sO2, especially in depth. Overall, we conclude that multispectral PA imaging can potentially be a valuable tool for change detection of sO2 in the cerebral cortex of a gyrencephalic brain. The spectral unmixing algorithms investigated in this work will be made publicly available as part of the open-source software platform Medical Imaging Interaction Toolkit (MITK).
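As a point of reference for what linear spectral unmixing does per pixel, the sketch below fits HbO2 and Hb concentrations to a measured PA spectrum by non-negative least squares and derives sO2 as their ratio. The wavelength set and extinction values are placeholders only, to be replaced with tabulated hemoglobin spectra; this shows the generic principle, not the specific algorithms benchmarked in the study.

```python
# Sketch of per-pixel linear spectral unmixing for sO2 estimation.
import numpy as np
from scipy.optimize import nnls

wavelengths = np.array([750, 780, 800, 820, 850])   # nm, example wavelength set
eps_hbo2 = np.array([0.5, 0.6, 0.8, 0.9, 1.1])      # placeholder extinction values
eps_hb   = np.array([1.4, 1.1, 0.8, 0.7, 0.7])      # placeholder extinction values
E = np.stack([eps_hbo2, eps_hb], axis=1)             # unmixing matrix (n_wl, 2)

def unmix_pixel(spectrum):
    """Non-negative least-squares fit of HbO2/Hb, returning the sO2 estimate."""
    c, _ = nnls(E, spectrum)
    hbo2, hb = c
    total = hbo2 + hb
    return hbo2 / total if total > 0 else np.nan
```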
Real-time monitoring of functional tissue parameters, such as local blood oxygenation, based on optical imaging could provide groundbreaking advances in the diagnosis and interventional therapy of various diseases. Although photoacoustic (PA) imaging is a modality with great potential to measure optical absorption deep inside tissue, quantification of the measurements remains a major challenge. We introduce the first machine learning-based approach to quantitative PA imaging (qPAI), which relies on learning the fluence in a voxel to deduce the corresponding optical absorption. The method encodes relevant information of the measured signal and the characteristics of the imaging system in voxel-based feature vectors, which allow the generation of thousands of training samples from a single simulated PA image. Comprehensive in silico experiments suggest that context encoding-qPAI enables highly accurate and robust quantification of the local fluence and thereby the optical absorption from PA images.
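The relationship that makes fluence estimation useful is the standard photoacoustic signal model p0 = Gamma * mu_a * phi: once the local fluence phi is regressed from the encoded context features, the absorption coefficient follows by division. Below is a minimal sketch of that final correction step, assuming a constant Grueneisen parameter and an already available fluence estimate.

```python
# Sketch of the fluence-correction step in quantitative PA imaging.
import numpy as np

GAMMA = 0.2  # assumed constant Grueneisen parameter (illustrative value)

def absorption_from_p0(p0_image, fluence_estimate, eps=1e-9):
    """Deduce optical absorption mu_a from initial pressure and estimated fluence."""
    return p0_image / (GAMMA * fluence_estimate + eps)
```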
KEYWORDS: Software development, Blood, Photoacoustic spectroscopy, In vivo imaging, Ultrasonography, Imaging systems, Scanners, Medical imaging, Control systems, Ultrasonics
Photoacoustic (PA) systems based on clinical linear ultrasound arrays have become increasingly popular in translational PA research. Such systems can more easily be integrated into a clinical workflow because they provide simultaneous access to ultrasonic imaging and are familiar to clinicians. In contrast to more complex setups, handheld linear probes can be applied to a large variety of clinical use cases. However, most translational work with such scanners is based on proprietary development and is therefore not accessible to the community.
In this contribution, we present a custom-built, hybrid, multispectral, real-time photoacoustic and ultrasonic imaging system with a linear array probe that is controlled by software developed within the highly customisable and extendable open-source software platform Medical Imaging Interaction Toolkit (MITK). Our software offers direct control of both the laser and the ultrasonic system and may thus serve as a starting point for various translational research and development. To demonstrate the extensibility of our system, we developed an open-source software plugin for real-time in vivo blood oxygenation measurements. Blood oxygenation is estimated by spectral unmixing of hemoglobin chromophores. The performance is demonstrated on in vivo measurements of the common carotid artery as well as peripheral extremity vessels of healthy volunteers.
KEYWORDS: Reconstruction algorithms, Photoacoustic spectroscopy, Signal to noise ratio, Ultrasonography, Transducers, Image resolution, Pulsed laser operation, Chromophores, In vitro testing, In vivo imaging
Reconstruction of photoacoustic images acquired with clinical ultrasound transducers is traditionally performed using the delay and sum (DAS) beamforming algorithm. Recently, the delay multiply and sum (DMAS) beamforming algorithm has been shown to provide increased contrast, signal-to-noise ratio (SNR), and resolution in PA imaging. The main reason for the continued use of DAS beamforming in photoacoustics is its linearity: the reconstructed PA signal remains proportional to the initial pressure generated by the absorbed laser pulse. This is crucial for the identification of different chromophores in multispectral PA applications, and DMAS has not yet been demonstrated to provide this property. Furthermore, due to its increased computational complexity, DMAS has not yet been shown to work in real time.
We present an open-source real-time variant of the DMAS algorithm which ensures linearity of the reconstruction while still providing increased SNR and therefore enables use of DMAS for multispectral PA applications. This is demonstrated in vitro and in vivo. The DMAS and reference DAS algorithms were integrated in the open-source Medical Imaging Interaction Toolkit (MITK) and are available to the community as real-time capable GPU implementations.
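For readers unfamiliar with the two beamformers, the sketch below contrasts the per-pixel arithmetic of DAS and of the signed, square-root DMAS formulation for already delay-corrected element samples. It shows only the basic operations; the linearity-preserving modifications and the real-time GPU implementation described above are not reproduced here.

```python
# Sketch of DAS vs. DMAS beamforming for one image pixel.
# `s` holds the delay-corrected sample from each transducer element for that pixel.
import numpy as np

def das(s):
    """Delay-and-sum: plain coherent summation."""
    return np.sum(s)

def dmas(s):
    """Delay-multiply-and-sum: sum of sign-preserving square roots of pairwise products."""
    y = 0.0
    for i in range(len(s) - 1):
        prod = s[i] * s[i + 1:]                      # products with all elements j > i
        y += np.sum(np.sign(prod) * np.sqrt(np.abs(prod)))
    return y
```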
Quantification of photoacoustic (PA) images is one of the major challenges currently being addressed in PA research. Tissue properties can be quantified by correcting the recorded PA signal with an estimation of the corresponding fluence. Fluence estimation itself, however, is an ill-posed inverse problem which usually needs simplifying assumptions to be solved with state-of-the-art methods. These simplifications, as well as noise and artifacts in PA images reduce the accuracy of quantitative PA imaging (PAI). This reduction in accuracy is often localized to image regions where the assumptions do not hold true. This impedes the reconstruction of functional parameters when averaging over entire regions of interest (ROI). Averaging over a subset of voxels with a high accuracy would lead to an improved estimation of such parameters. To achieve this, we propose a novel approach to the local estimation of confidence in quantitative reconstructions of PA images. It makes use of conditional probability densities to estimate confidence intervals alongside the actual quantification. It encapsulates an estimation of the errors introduced by fluence estimation as well as signal noise. We validate the approach using Monte Carlo generated data in combination with a recently introduced machine learning-based approach to quantitative PAI. Our experiments show at least a two-fold improvement in quantification accuracy when evaluating on voxels with high confidence instead of thresholding signal intensity.
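A simple way to picture confidence-gated quantification is shown below: a functional parameter is averaged only over ROI voxels whose estimated uncertainty falls below a threshold. The per-voxel uncertainties are assumed to come from some uncertainty estimator (for example, the spread of an ensemble of regressors); this is an illustrative stand-in for the conditional-probability-density approach described above.

```python
# Sketch of confidence-gated averaging over a region of interest (ROI).
import numpy as np

def confidence_gated_mean(estimates, uncertainties, roi_mask, max_uncertainty):
    """Average `estimates` over ROI voxels whose uncertainty is below the threshold."""
    keep = roi_mask & (uncertainties < max_uncertainty)
    if not np.any(keep):
        return np.nan
    return float(np.mean(estimates[keep]))
```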
KEYWORDS: Sensors, Monte Carlo methods, Image processing, Photoacoustic spectroscopy, Reconstruction algorithms, Computer simulations, Error analysis, Data modeling, Tissues, Medical imaging
Quantification of tissue properties with photoacoustic (PA) imaging typically requires a highly accurate representation of the initial pressure distribution in tissue. Almost all PA scanners reconstruct the PA image only from a partial scan of the emitted sound waves. Especially handheld devices, which have become increasingly popular due to their versatility and ease of use, only provide limited view data because of their geometry. Owing to such limitations in hardware as well as to the acoustic attenuation in tissue, state-of-the-art reconstruction methods deliver only approximations of the initial pressure distribution. To overcome the limited view problem, we present a machine learning-based approach to the reconstruction of initial pressure from limited view PA data. Our method involves a fully convolutional deep neural network based on a U-Net-like architecture with pixel-wise regression loss on the acquired PA images. It is trained and validated on in silico data generated with Monte Carlo simulations. In an initial study we found an increase in accuracy over the state-of-the-art when reconstructing simulated linear-array scans of blood vessels.
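A toy version of such a network is sketched below: a shallow U-Net-like fully convolutional model with a single skip connection, trained with a pixel-wise mean-squared-error loss. Depth, channel counts, and all training details are illustrative assumptions and far smaller than a network one would actually train on simulated linear-array scans.

```python
# Sketch of a small U-Net-like network with a pixel-wise regression loss (PyTorch).
import torch
import torch.nn as nn

def conv_block(c_in, c_out):
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(c_out, c_out, 3, padding=1), nn.ReLU(inplace=True))

class TinyUNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc1 = conv_block(1, 16)          # encoder, full resolution
        self.enc2 = conv_block(16, 32)         # encoder, half resolution
        self.pool = nn.MaxPool2d(2)
        self.up = nn.ConvTranspose2d(32, 16, 2, stride=2)
        self.dec1 = conv_block(32, 16)         # decoder with skip connection
        self.out = nn.Conv2d(16, 1, 1)         # per-pixel initial pressure estimate

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        d1 = self.dec1(torch.cat([self.up(e2), e1], dim=1))
        return self.out(d1)

model = TinyUNet()
loss_fn = nn.MSELoss()   # pixel-wise regression loss on the reconstructed image
```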
KEYWORDS: Photoacoustic tomography, Angiography, Acquisition tracking and pointing, Tissues, Signal processing, Calibration, Tissue optics, 3D image processing, Optical imaging, Signal to noise ratio, 3D image reconstruction, Absorption, Imaging systems, Visualization
Photoacoustic tomography (PAT) is capable of imaging optical absorption at depths beyond the diffusion limit. As blood is one of the main absorbers in tissue, one important application is the visualization of vasculature, which can provide important clues for diagnosing diseases like cancer. While state-of-the-art work in photoacoustic 3D angiography has focused on computed tomography systems involving complex setups, we propose an approach based on optically tracking a freehand linear ultrasound probe that can be smoothly integrated into the clinical workflow. To this end, we present a method for calibration of a PAT system using an N-wire phantom specifically designed for PAT and show how to use local gradient information in the 3D reconstructed volume to significantly enhance the signal. According to experiments performed with a tissue-mimicking Intralipid phantom, the signal-to-noise ratio, contrast, and contrast-to-noise ratio measured in the full field of view of the linear probe can be improved by factors of 1.7±0.7, 14.6±5.8 and 2.8±1.2, respectively, when comparing the 3D volume reconstructed after envelope detection with the processed one. Qualitative validation performed in tissue-mimicking gelatin phantoms further showed good agreement of the reconstructed vasculature with corresponding structures extracted from X-ray computed tomographies. As our method provides high-contrast 3D images of the vasculature despite a low hardware complexity, its potential for clinical application is high.
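The gradient-based enhancement step can be illustrated with the small sketch below, which weights each voxel of the reconstructed volume by its local gradient magnitude to emphasise vessel boundaries. The specific weighting shown is an assumption for illustration, not necessarily the exact operator applied in the study.

```python
# Sketch of gradient-magnitude weighting of a reconstructed 3D volume.
import numpy as np

def gradient_enhance(volume, alpha=1.0):
    """Weight each voxel by the local gradient magnitude of the volume."""
    g0, g1, g2 = np.gradient(volume.astype(np.float32))   # gradients along the 3 axes
    grad_mag = np.sqrt(g0**2 + g1**2 + g2**2)
    return volume * (1.0 + alpha * grad_mag / (grad_mag.max() + 1e-9))
```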