The relationship between neural activity, brain function, and corresponding biological behaviors remains a significant challenge in neuroscience. Exploring this relationship requires optical imaging techniques that can acquire real-time data with high spatial resolution. Wearable miniaturized microscopes (miniscopes) have recently emerged as a promising technology, enabling long-term recording of neural activity in freely moving animals. However, most one-photon miniscopes cannot simultaneously achieve imaging at depth, high resolution, and a large field of view (FOV). To address this, we developed a one-photon miniaturized fluorescence microscope (1P-miniFM) for imaging live brain neurons in freely behaving animals at subcellular resolution (~1.2 μm). A specially designed optical path achieves an imaging FOV of ~700 × 400 μm, and an integrated electrowetting lens (EWL) provides a wide z-axis scanning range of ~300 μm with little loss of resolution. The 1P-miniFM is compact (11 × 17 × 24 mm) and lightweight (~2.9 g), imposing little impediment on animals' spontaneous behaviors. Using the genetically encoded calcium indicator GCaMP6s, we monitored neuronal activity in the secondary motor cortex (M2) during consecutive pain-related and sensory stimulations, and found that M2 neurons play a key role, exhibiting distinct response patterns across stimuli. The 1P-miniFM thus holds promise as an excellent tool for exploring relationships between neuronal networks and animal behavior.
High-resolution three-dimensional brain image reconstruction is crucial for understanding the brain. Light sheet microscopy combined with tissue-clearing imaging plays a pivotal role in analyzing the micro-level structure of mammalian brains. However, the complex multi-level stitching process poses challenges such as non-overlapping areas, surface deformation, and tissue loss, resulting in incomplete or discontinuous tissue structures at the junctions. These issues not only impact the precision of the atlas but also complicate subsequent analyses such as cell counting and neuron tracing. To address them, we propose a rapid deep learning-based image inpainting approach for accurate neuron reconstruction and analysis. Our approach first employs conventional registration algorithms to preliminarily stitch brain sections together, then uses a neural network to predict and restore missing tissue with a thickness exceeding 10 µm. This process enhances the structural continuity and integrity between adjacent brain slices. Compared to the original 3D U-Net and ResNet models, our approach achieves better performance, with a processing speed five times faster than the original 3D U-Net. Moreover, our method enables more accurate cell counting by repairing incomplete cell bodies, yielding an average improvement of 37.37% in the number of cell bodies accurately counted near slice junctions. By integrating this novel 3D image inpainting network into brain reconstruction pipelines, our research opens new avenues for a more detailed and accurate investigation of neural circuitry and neurological disorders.
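The abstract above describes filling a missing slab of tissue (>10 µm thick) between two stitched brain slices with a learned 3D network. The following is a toy sketch of that inpainting task setup only: it substitutes simple linear interpolation between the bounding z-planes for the paper's 3D U-Net-style network, and all function and variable names are hypothetical.

```python
import numpy as np

def inpaint_gap_linear(top, bottom, n_missing):
    """Fill a missing slab between two adjacent brain-slice volumes.

    Toy stand-in for the paper's learned 3D inpainting network: linearly
    blends the last z-plane of the upper slice (`top`) with the first
    z-plane of the lower slice (`bottom`).
    Returns an array of shape (n_missing, H, W).
    """
    # Interior blending weights, excluding the two known boundary planes
    weights = np.linspace(0.0, 1.0, n_missing + 2)[1:-1]
    return np.stack([(1 - w) * top + w * bottom for w in weights])

# Example: a gap sampled at 5 missing z-planes between two junction planes
top = np.full((4, 4), 100.0)      # intensity plane at the upper junction
bottom = np.full((4, 4), 200.0)   # intensity plane at the lower junction
gap = inpaint_gap_linear(top, bottom, 5)
```

In the real pipeline the restored slab would come from a trained 3D network conditioned on surrounding tissue context, which is what lets repaired cell bodies near the junction be counted correctly.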
Non-invasive deep tissue imaging and focusing are in high demand in biomedical research. However, for in vivo applications, the major challenge is the limited imaging depth, a result of random scattering in biological tissue that causes exponential attenuation of the ballistic component of the light wave. Here we present optical focusing with diffraction-limited resolution deep inside highly scattering media using machine learning. Compared with conventional adaptive optics, our method not only provides high-speed, sensor-less wavefront measurement with more than 90% accuracy, but also dramatically reduces photobleaching and photodamage. This technology paves the way for many important applications in fundamental biology research, especially in neuroscience.
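The core idea above is sensor-less wavefront measurement: infer aberration coefficients directly from measured intensity patterns via a learned model, rather than iterative probing. The sketch below illustrates this with a deliberately simplified assumption that patterns depend linearly on Zernike-like coefficients, so a least-squares fit stands in for the paper's machine-learning model; the dimensions and data are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: each scattered-light intensity pattern (flattened
# to a feature vector) is assumed to depend linearly on the wavefront's
# modal coefficients -- a toy stand-in for the learned sensor-less model.
n_modes, n_pixels, n_train = 5, 64, 500
forward = rng.normal(size=(n_modes, n_pixels))      # unknown tissue response
coeffs_train = rng.normal(size=(n_train, n_modes))  # training aberrations
patterns_train = coeffs_train @ forward             # simulated measurements

# "Train" the inverse mapping from intensity patterns to coefficients
inverse, *_ = np.linalg.lstsq(patterns_train, coeffs_train, rcond=None)

# Predict the wavefront of an unseen aberration from its pattern alone,
# with no iterative wavefront sensing
true = rng.normal(size=n_modes)
pred = (true @ forward) @ inverse
accuracy = 1 - np.linalg.norm(pred - true) / np.linalg.norm(true)
```

Because the wavefront is recovered in a single inference step from one measurement, the sample sees far fewer exposures than in iterative adaptive-optics optimization, which is the mechanism behind the reduced photobleaching and photodamage claimed above.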
Refractive index heterogeneity severely limits the imaging performance of optical microscopy in deep tissue. Adaptive optics (AO) is widely used to recover diffraction-limited resolution at depth. However, there is a tradeoff between temporal and spatial resolution that makes real-time imaging in deep tissue difficult to achieve, partly because the effective correction area of conventional AO with a single guide star (GS) is limited. The use of multiple guide stars is therefore a potential solution for enlarging the corrected field of view. Here we report an algorithm for the automatic selection of multiple guide stars and demonstrate its feasibility by implementing it in a conjugate adaptive-optics correction system with multiple GSs. Simulation results indicate that, compared with a single guide star, high-resolution imaging can be obtained over most of the imaging area with nine automatically selected guide stars. Furthermore, the method determines the optimal number and positions of guide stars automatically, and we expect it to correct aberrations over even larger areas. This method therefore has great potential for in vivo deep tissue imaging.
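The abstract does not specify how the automatic guide-star selection works, so the following is only a plausible toy sketch: it treats each guide star as correcting aberrations within a fixed radius of its position and greedily picks candidates that cover the most as-yet-uncorrected field of view. All names and parameters are hypothetical.

```python
import numpy as np

def select_guide_stars(candidates, fov, radius, max_stars):
    """Greedy guide-star selection to cover the field of view.

    Toy assumption: each guide star corrects aberrations within `radius`
    pixels of its position; stars are chosen greedily by the number of
    still-uncorrected pixels they would cover.
    candidates: list of (y, x) positions; fov: (H, W) in pixels.
    Returns (chosen candidate indices, fraction of FOV covered).
    """
    ys, xs = np.mgrid[0:fov[0], 0:fov[1]]
    uncovered = np.ones(fov, dtype=bool)
    chosen = []
    for _ in range(max_stars):
        gains = [np.count_nonzero(
                     uncovered &
                     ((ys - cy) ** 2 + (xs - cx) ** 2 <= radius ** 2))
                 for cy, cx in candidates]
        best = int(np.argmax(gains))
        if gains[best] == 0:          # field fully covered; stop early
            break
        chosen.append(best)
        cy, cx = candidates[best]
        uncovered &= (ys - cy) ** 2 + (xs - cx) ** 2 > radius ** 2
    return chosen, 1 - uncovered.mean()

# Example: a 3x3 grid of candidate guide stars over a 30x30 pixel field
cands = [(y, x) for y in (5, 15, 25) for x in (5, 15, 25)]
picked, coverage = select_guide_stars(cands, (30, 30), 8, 9)
```

A selection rule of this shape would naturally yield both the number and the positions of guide stars, consistent with the abstract's claim of obtaining the optimal number and positions automatically.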