This PDF file contains the front matter associated with SPIE Proceedings Volume 11731, including the title page, copyright information, and table of contents.
Access to the requested content is limited to institutions that have purchased or subscribe to SPIE eBooks. You are receiving this notice because your organization may not have SPIE eBooks access.* *Shibboleth/Open Athens users: please sign in to access your institution's subscriptions. To obtain this item, you may purchase the complete book in print or electronic format on SPIE.org.
In this work we demonstrate acceptable classification performance for classifiers trained on augmented training data, consisting of synthetic cells imaged by simulated holography, and evaluated on experimentally collected holographic data. In particular, we utilize experimentally collected DHM phase maps derived from the MDA-MB-231 breast cancer and immortalized human gingival fibroblast (GIE) cell lines. Automated segmentation was performed using a flood-fill clustering approach, and the resulting experimental feature distributions were used to generate statistically random synthetic cell realizations for each class.
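As a rough illustration of the flood-fill clustering step, the sketch below labels connected regions of a thresholded phase map with a breadth-first flood fill; the function name and toy mask are illustrative, not the authors' implementation.

```python
from collections import deque

def flood_fill_labels(mask):
    """Label 4-connected foreground regions of a binary mask via BFS flood fill."""
    h, w = len(mask), len(mask[0])
    labels = [[0] * w for _ in range(h)]
    current = 0
    for i in range(h):
        for j in range(w):
            if mask[i][j] and labels[i][j] == 0:
                current += 1                      # start a new cell region
                queue = deque([(i, j)])
                labels[i][j] = current
                while queue:
                    y, x = queue.popleft()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w and mask[ny][nx] and labels[ny][nx] == 0:
                            labels[ny][nx] = current
                            queue.append((ny, nx))
    return labels, current

# two separate "cells" in a toy thresholded phase map
mask = [[1, 1, 0, 0],
        [1, 0, 0, 1],
        [0, 0, 1, 1]]
labels, n = flood_fill_labels(mask)
print(n)  # 2 connected components
```

Per-region feature statistics (area, mean phase, and so on) could then be accumulated from `labels` to fit the class-wise distributions used for synthesis.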
Computer-generated holography (CGH) has enabled the formation of arbitrary images through complex spatial light modulation. Optimizing spatial light modulators (SLMs) and diffractive optical elements (DOEs) amounts to solving the well-known phase retrieval problem. This paper proposes a physically constrained artificial neural network (ANN) designed to solve the phase retrieval problem for CGH. We show that, through careful selection of the model's structural parameters and by limiting the scope of model optimization, we can encode the Fresnel diffraction equations directly into an ANN. We train the proposed model to overfit to a single image; that is, the model finds the SLM phase delays required to produce the desired image. The proposed model performs well, producing outputs that compare favorably with the ideal images. The method proposed in this work holds value for those who require confidence that their machine learning techniques are physically realizable.
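The physics such a network encodes can be sketched with a Fresnel transfer-function propagator; the wavelength, pixel pitch, and propagation distance below are illustrative assumptions, and a trained ANN would optimize the phase array rather than drawing it at random.

```python
import numpy as np

def fresnel_propagate(field, wavelength, z, pitch):
    """Propagate a complex field a distance z using the Fresnel transfer function."""
    n = field.shape[0]
    f = np.fft.fftfreq(n, d=pitch)
    fx, fy = np.meshgrid(f, f)
    # Fresnel-approximation transfer function (unit modulus, so energy conserving)
    H = np.exp(-1j * np.pi * wavelength * z * (fx**2 + fy**2))
    return np.fft.ifft2(np.fft.fft2(field) * H)

# SLM modeled as a pure phase mask on a uniform beam
rng = np.random.default_rng(0)
phase = rng.uniform(0, 2 * np.pi, (64, 64))
slm_field = np.exp(1j * phase)

out = fresnel_propagate(slm_field, wavelength=633e-9, z=0.1, pitch=8e-6)
image = np.abs(out) ** 2  # the intensity a model would compare against its target
```

A phase-retrieval loop would adjust `phase` to minimize the difference between `image` and the desired image, with the propagator fixed as a non-trainable layer.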
Brain tumor patients frequently experience tumor-induced alterations in cognitive function. The early detection of such alterations is imperative in the clinical environment, as is the need for computational tools capable of quantitatively characterizing the functional connectivity changes observed in brain imaging data. This paper presents the application of a modern control concept, pinning controllability, to determine intervention points (driver nodes) in the tumor-bearing resting-state connectome. The theoretical framework provides the minimal number of driver nodes and their locations needed to attain full control over the obtained graph network, in order to steer the network's dynamics from an initial (disease) state to a desired (non-disease) state. We are thus able to quantify the tumor-induced alterations in different brain regions and the resulting differences in brain connectivity and dynamics. These results will provide clinicians with techniques to identify additional tumor-affected regions and biological pathways for brain cancer, and to design and test novel therapeutic solutions.
We demonstrate a technique for restoring imagery from a computational imaging camera with a phase mask that produces a blurred, space-variant point spread function (PSF). To recover arbitrary images, we first calibrate the computational imaging process using a Karhunen-Loève decomposition, in which the PSFs are sampled across the field of view of the camera system. These PSFs can be transformed into a series of spatially invariant "eigen-PSFs", each with an associated coefficient matrix. The act of performing a spatially varying image deconvolution thereby becomes a weighted sum of spatially invariant deconvolutions. After demonstrating this process on simulated data, we also show real-world results from a camera system modified with a diffractive waveplate, and provide a brief discussion of processing time and the tradeoffs inherent to the technique.
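The eigen-PSF construction can be sketched as a plain SVD (Karhunen-Loève) decomposition of flattened PSFs sampled over the field; the synthetic low-rank PSF stack below is an illustrative stand-in for calibration data.

```python
import numpy as np

rng = np.random.default_rng(1)
n_pos, psf_size = 25, 9              # PSFs sampled at 25 field positions, 9x9 each
# synthetic space-variant PSFs: a shared core plus smooth field-dependent variation
core = rng.random((psf_size, psf_size))
p1, p2 = rng.random((psf_size, psf_size)), rng.random((psf_size, psf_size))
psfs = np.stack([core + t * p1 + t**2 * p2 for t in np.linspace(0, 1, n_pos)])

X = psfs.reshape(n_pos, -1)          # one flattened PSF per row
mean = X.mean(axis=0)
U, S, Vt = np.linalg.svd(X - mean, full_matrices=False)

k = 3                                # keep the k strongest eigen-PSFs
eigen_psfs = Vt[:k].reshape(k, psf_size, psf_size)
coeffs = (X - mean) @ Vt[:k].T       # per-position weights for each eigen-PSF

# each local PSF is the mean PSF plus a weighted sum of eigen-PSFs
recon = mean + coeffs @ Vt[:k]
err = np.abs(recon - X).max()
```

Because each eigen-PSF is spatially invariant, the full restoration can be assembled as a weighted sum of k ordinary deconvolutions, with the weights interpolated across the field from `coeffs`.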
Photoacoustic imaging is becoming a very promising tool for the study of living organisms. It combines the high contrast of optical imaging with the high resolution of acoustic imaging to image absorbing structures in biological tissue. Because the scattering of ultrasound in biological tissue is 2-3 orders of magnitude weaker than that of light, and the endogenous absorption contrast of the tissue is used directly in the imaging process, photoacoustic imaging offers large imaging depth and is non-destructive. As an important branch of photoacoustic imaging, photoacoustic microscopy can provide micron-level or even sub-micron imaging resolution, which is of great significance for biological research such as blood vessel detection. Since the lateral resolution of a photoacoustic microscopy system depends on the laser focus, higher resolution can be obtained by increasing the numerical aperture of the condenser objective. However, a large numerical aperture usually means a shorter working distance and makes the entire imaging system very sensitive to small optical defects, so resolution improvements via this route are limited in practical applications. This paper implements an iterative deconvolution method to obtain high-resolution photoacoustic images of the brain. The focal spot of the photoacoustic microscope is measured to obtain the lateral point spread function (PSF) of the system, and the measured PSF is used as the initial system PSF for Lucy-Richardson (LR) deconvolution. The resulting images of the cerebral vasculature have higher resolution: the full width at half maximum (FWHM) of the same cerebral capillary before and after deconvolution is 7 μm and 3.6 μm, respectively, and the image definition is increased by a factor of about 1.9. Experiments show that this method can further improve the clarity of photoacoustic images of cerebral capillaries, laying a foundation for further research on brain imaging.
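A minimal sketch of LR deconvolution with a measured PSF, using FFT-based circular convolution; the Gaussian PSF, iteration count, and toy "capillary" scene are illustrative stand-ins for the measured focal spot and real data.

```python
import numpy as np

def fft_conv(img, psf):
    """Circular convolution via FFT (PSF assumed centered and normalized)."""
    return np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(np.fft.ifftshift(psf))))

def richardson_lucy(observed, psf, n_iter=30, eps=1e-12):
    """Classic Richardson-Lucy iterations; for a symmetric PSF the adjoint
    (flipped) kernel equals the PSF itself, so it is reused directly."""
    est = np.full_like(observed, observed.mean())
    for _ in range(n_iter):
        ratio = observed / (fft_conv(est, psf) + eps)
        est = est * fft_conv(ratio, psf)
    return est

# toy "capillary": a bright line blurred by a Gaussian focal spot
n = 64
y, x = np.mgrid[:n, :n]
truth = np.where(np.abs(x - n // 2) < 1, 1.0, 0.0) + 0.01
psf = np.exp(-((x - n // 2) ** 2 + (y - n // 2) ** 2) / (2 * 2.0 ** 2))
psf /= psf.sum()
blurred = fft_conv(truth, psf)

restored = richardson_lucy(blurred, psf)
err_blur = np.abs(blurred - truth).mean()
err_rest = np.abs(restored - truth).mean()
```

The restored line profile is narrower than the blurred one, mirroring the FWHM reduction reported above.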
Imaging under low-light conditions is a challenging but important problem due to low dynamic range, image noise, and blur. In this work, we propose blur-free low-light imaging techniques that combine a conventional color camera with an event camera. The event camera complements the color camera by measuring brightness changes asynchronously, at high speed and with high dynamic range. We synchronize the two sensors with an external trigger cable, align their viewpoints using a beamsplitter, and co-calibrate the two cameras geometrically. We derive an image formation model and use the inverted model to reduce the blur in the color images. Experimental results demonstrate the effectiveness of our method.
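One common form of such an inverted image-formation model, in the spirit of event-based double-integral deblurring (stated here as an assumption, not the authors' exact derivation), treats each blurred pixel as the temporal mean of a latent intensity reconstructed from the signed event stream:

```python
import numpy as np

# Event camera model: an event of polarity +/-1 fires when log intensity changes
# by a contrast threshold c. During exposure T, the latent intensity at time t is
# L(t) = L0 * exp(c * N(t)), where N(t) is the signed event count since exposure start.
c = 0.2
T = 1.0
times = np.linspace(0, T, 1000)

# simulated signed event counts for one pixel (a brightness step mid-exposure)
N = np.where(times < 0.5, 0, 3)              # three positive events at t = 0.5

L0_true = 0.8
blurred = np.mean(L0_true * np.exp(c * N))   # the color camera integrates over T

# invert the model: B = L0 * mean(exp(c*N))  =>  L0 = B / mean(exp(c*N))
L0_rec = blurred / np.mean(np.exp(c * N))
```

Applied per pixel, this recovers a sharp latent frame `L0` at the exposure start from the blurred frame and the event counts, with `c` obtained from the co-calibration step.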
With the development of science, technology, and industrial production, three-dimensional shape measurement has become increasingly widespread in machine vision, industrial monitoring, mechanical engineering, and medical testing. Optical three-dimensional shape measurement obtains the three-dimensional information of the measured object using optical methods, and structured light measurement is one of the most widely used approaches. In structured light measurement, the generation of fringe patterns is a critical step. Here, we use a holographic method to generate a fringe pattern whose period and phase can be easily modulated. First, a black-and-white fringe pattern with a certain spatial frequency is generated according to the desired cosine structured-light period and phase; the spatial frequency of the black-and-white fringes is determined by the spatial frequency of the structured light to be generated and the magnification of the projection system. A prism phase that causes a lateral shift is then applied to the black part of the fringes, so that light incident on the black regions is deflected off the optical axis into the first diffraction order, while light incident on the other regions (left unmodulated) continues along the optical axis. In this way, the on-axis beam acquires alternating bright and dark regions, and structured illumination is obtained. Theory and experiments verify the effectiveness of the method. The fringe-generation method is simple, fast, and accurate, can be conveniently controlled, and is well suited to three-dimensional shape measurement.
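A sketch of the hologram construction described above, with a binary fringe and a blazed (prism) phase ramp applied only on the dark stripes; the grid size, fringe period, and ramp period are illustrative.

```python
import numpy as np

n = 256
period = 32                                # fringe period in SLM pixels
x = np.arange(n)
dark = (x % period) < period // 2          # the "black" half of each fringe period

# blazed (prism) phase ramp applied only on the dark stripes: light hitting these
# pixels is steered into the first diffraction order, away from the optical axis
ramp_period = 8                            # pixels per 2*pi of prism phase
prism = 2 * np.pi * (x % ramp_period) / ramp_period
hologram_row = np.where(dark, prism, 0.0)
hologram = np.tile(hologram_row, (n, 1))   # 1D fringe profile -> 2D phase pattern
```

Shifting the fringe phase amounts to rolling `dark` along x, and the fringe frequency follows directly from `period`, which is what makes the pattern easy to modulate.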
Quantitative phase imaging (QPI) has emerged as a powerful computational tool that enables imaging unlabelled specimens with high contrast. It finds applications in microscopy, refractive index mapping, biomedical imaging, and surface measurement. Several techniques, including interferometry, holography, iterative methods, and the Transport of Intensity Equation, have been developed over the years for QPI. However, the spatial resolution of the retrieved phase images is limited by the diffraction limit of the imaging system. Prior work on super-resolution phase imaging has focused primarily on holography-based techniques, which require illumination sources with high coherence, phase unwrapping, and high experimental stability. In this work, we propose a propagation-based super-resolution phase imaging technique using the Contrast Transfer Function (CTF) and structured illumination. A twofold enhancement in resolution is demonstrated with numerical results.
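Under the weak-phase approximation, the defocused intensity spectrum is related to the phase spectrum through a CTF of the form 2 sin(πλz|f|²) (sign conventions vary between references). The sketch below runs this forward model and inverts it on frequencies where the CTF is well conditioned; all parameters are illustrative.

```python
import numpy as np

n, pitch = 64, 1e-6
wavelength, z = 500e-9, 1e-3
rng = np.random.default_rng(2)

f = np.fft.fftfreq(n, d=pitch)
fx, fy = np.meshgrid(f, f)
ctf = 2 * np.sin(np.pi * wavelength * z * (fx**2 + fy**2))  # weak-phase CTF

# smooth random test phase (low-pass filtered noise)
phi = np.real(np.fft.ifft2(np.fft.fft2(rng.standard_normal((n, n))) *
                           np.exp(-(fx**2 + fy**2) / (2 * (0.2 / pitch) ** 2))))

# forward model in the Fourier domain: I_hat = delta + CTF * phi_hat
phi_hat = np.fft.fft2(phi)
I_hat = np.zeros_like(phi_hat)
I_hat[0, 0] = n * n                  # the delta (unit background) term
I_hat += ctf * phi_hat

# regularized inversion, only where the CTF is well conditioned
mask = np.abs(ctf) > 0.1
phi_hat_rec = np.where(mask, I_hat / np.where(mask, ctf, 1.0), 0.0)
err = np.max(np.abs(phi_hat_rec[mask] - phi_hat[mask]))
```

Structured illumination then shifts high spatial frequencies into this recoverable passband, which is what enables the twofold resolution gain.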
Machine learning methods using convolutional neural networks (CNNs) have played an important role in many biomedical applications. Here we apply these techniques to the detection of retinal blood vessels in ophthalmological images. Since the eye is one of the most important sensory organs in the human body, it is critical to diagnose diseases in their early stages. Early symptoms of various diseases such as glaucoma, diabetic retinopathy, and cardiovascular disease can be detected from the structure of the retinal blood vessels, and studying the retinal vessel structure and network requires blood vessel segmentation. Deep learning has been applied to this task over the last five years and has achieved state-of-the-art performance; in particular, recent work segments blood vessels more efficiently using the U-Net architecture. In our approach, we used a forward convolutional long short-term memory (convLSTM) module to combine the feature maps of the encoding path and the corresponding decoding path, in lieu of the simple concatenation in the U-Net skip connections. We also added a connected convolution layer to the last layers of the encoding path to obtain more diverse features. The images used were from the DRIVE database, and preprocessing was performed to obtain more accurate results. We achieved 95% accuracy, a precision of 91%, a sensitivity of 52%, and a specificity of 99%. Even with a low sensitivity of 52%, all of the major blood vessels are found successfully.
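The reported figures follow the standard confusion-matrix definitions; a minimal sketch on a toy prediction (the arrays are illustrative, not DRIVE data). Note how a model can miss half the vessel pixels (low sensitivity) while keeping accuracy and specificity high, because background pixels dominate:

```python
import numpy as np

def segmentation_metrics(pred, truth):
    """Accuracy, precision, sensitivity, and specificity from binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = np.sum(pred & truth)      # vessel pixels correctly found
    tn = np.sum(~pred & ~truth)    # background correctly rejected
    fp = np.sum(pred & ~truth)
    fn = np.sum(~pred & truth)     # vessel pixels missed
    return {
        "accuracy": (tp + tn) / (tp + tn + fp + fn),
        "precision": tp / (tp + fp),
        "sensitivity": tp / (tp + fn),   # recall: fraction of vessel pixels found
        "specificity": tn / (tn + fp),
    }

truth = np.array([1, 1, 1, 1, 0, 0, 0, 0])
pred  = np.array([1, 1, 0, 0, 0, 0, 0, 0])
m = segmentation_metrics(pred, truth)
```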
Terahertz waves are electromagnetic waves with frequencies ranging from 0.1 THz to 10 THz. Because they can penetrate many non-polar materials, terahertz waves can be used to detect hidden objects. This paper proposes a convolutional neural network structure, Attention U-Net, to achieve super-resolution of terahertz images. The convolutional and pooling layers in the encoding path reduce the image size and extract edge features, while the deconvolution layers in the decoding path up-sample the image and restore its content. Skip connections between the feature maps of the symmetric encoding and decoding paths maximize the utilization of feature information in each layer of the network and effectively mitigate the vanishing-gradient problem. The network also replaces the convolutions on the encoder-decoder path with attention blocks, comprising spatial and channel attention mechanisms, which make the extracted features more directional, capture more detailed information about the target of interest, and suppress useless information. The proposed network and algorithm achieve good experimental results and have broad application prospects in the field of security inspection.
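An additive attention gate of the kind used in Attention U-Net-style architectures can be sketched with per-channel linear maps standing in for 1x1 convolutions; the weights and shapes below are illustrative random values, not a trained model.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def attention_gate(x, g, Wx, Wg, psi):
    """Additive attention: alpha = sigmoid(psi . relu(Wx x + Wg g)), out = alpha * x.

    x: skip-connection features (C, H, W); g: gating features from the decoder.
    Wx, Wg: (C_int, C) channel-mixing matrices standing in for 1x1 convolutions.
    """
    q = np.maximum(0.0, np.einsum('ic,chw->ihw', Wx, x) +
                        np.einsum('ic,chw->ihw', Wg, g))
    alpha = sigmoid(np.einsum('i,ihw->hw', psi, q))   # one attention weight per pixel
    return alpha[None] * x, alpha

rng = np.random.default_rng(3)
C, C_int, H, W = 4, 2, 8, 8
x = rng.standard_normal((C, H, W))
g = rng.standard_normal((C, H, W))
out, alpha = attention_gate(x, g, rng.standard_normal((C_int, C)),
                            rng.standard_normal((C_int, C)),
                            rng.standard_normal(C_int))
```

The per-pixel weights `alpha` are what let the network emphasize target regions in the skip features and suppress uninformative background.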
We propose a novel network for the fusion and enhancement of low-light-level and visible images, based on a feature-extraction convolutional neural network. By extracting the high-frequency information captured by the visible-light detector under low illumination, and combining the advantages of wide-activation networks and the channel attention mechanism, the network automatically filters and extracts the useful information in the image to perform super-resolution reconstruction of the low-light-level image. This compensates for the scarcity of visible-light information and the low resolution (LR) of low-light-level detectors at night, enabling all-weather real-time imaging. Experimental results show that our method achieves better numerical performance than traditional super-resolution network structures while retaining richer image texture, in closer agreement with human visual perception.
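A squeeze-and-excitation style channel attention block, which is one common realization of the channel attention mechanism mentioned above (the exact block used here is an assumption), can be sketched as:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def channel_attention(x, W1, W2):
    """Squeeze-and-excitation style channel attention on a (C, H, W) feature map."""
    s = x.mean(axis=(1, 2))                 # squeeze: global average pool per channel
    z = np.maximum(0.0, W1 @ s)             # excitation: bottleneck FC layer + ReLU
    w = sigmoid(W2 @ z)                     # per-channel weights in (0, 1)
    return x * w[:, None, None], w

rng = np.random.default_rng(4)
C, r, H, W = 8, 2, 16, 16                   # r is the bottleneck reduction ratio
x = rng.standard_normal((C, H, W))
out, w = channel_attention(x, rng.standard_normal((C // r, C)),
                           rng.standard_normal((C, C // r)))
```

In the fusion network, such weights let the model amplify channels carrying useful high-frequency detail from the low-light input while damping noisy ones.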