We report on a fiber optic sensor based on the physiological aspects of the eye and vision-related neural layers of the common housefly (Musca domestica) that has been developed and built for aerospace applications. The intent of the research is to reproduce select features of the fly's vision system that are desirable in image processing, including high functionality in low-light and low-contrast environments, sensitivity to motion, compact size, light weight, and low power and computation requirements. The fly uses a combination of overlapping photoreceptor responses, which are well approximated by Gaussian distributions, and neural superposition to detect image features, such as object motion, at far finer resolution than the photoreceptor density alone would imply. The Gaussian overlap in the biomimetic sensor comes from the front-end optical design, and the neural superposition is accomplished by subsequently combining the signals using analog electronics. The fly eye sensor is being developed to perform real-time tracking of a target on a flexible aircraft wing experiencing bending and torsion loads during flight. We report on results of laboratory experiments using the fly eye sensor to sense a target moving across its field of view.
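The core mechanism, overlapping Gaussian acceptance profiles whose outputs are combined into a smoothly varying position signal, can be illustrated with a short sketch. This is a minimal illustration, not the sensor's actual design: the spacing, profile width, and difference-over-sum combination below are assumed values standing in for the front-end optics and the analog superposition electronics.

    import numpy as np

    # Illustrative parameters (assumed, not the sensor's actual values):
    SPACING = 1.0   # angular spacing between adjacent photoreceptors
    SIGMA = 0.8     # standard deviation of each Gaussian acceptance profile

    def response(target_angle, receptor_angle, contrast=1.0):
        """Gaussian-weighted response of one photoreceptor to a point target."""
        return contrast * np.exp(-0.5 * ((target_angle - receptor_angle) / SIGMA) ** 2)

    def superposition_signal(target_angle):
        """Combine two overlapping receptor outputs into a normalized
        difference-over-sum signal, a simple stand-in for the analog
        neural-superposition stage."""
        r1 = response(target_angle, 0.0)
        r2 = response(target_angle, SPACING)
        return (r2 - r1) / (r2 + r1)

    # The combined signal varies smoothly as the target moves by small
    # fractions of the photoreceptor spacing:
    for angle in np.linspace(0.0, 1.0, 5):
        print(f"target at {angle:.2f} -> signal {superposition_signal(angle):+.3f}")

Because the two profiles overlap, the combined signal changes continuously with target position, which is why motion much smaller than the photoreceptor spacing remains detectable.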
Reducing the environmental impact of aviation is a primary goal of NASA aeronautics research. One approach to
achieve this goal is to build lighter weight aircraft, which presents complex challenges due to a corresponding increase in
structural flexibility. Wing flexibility can adversely affect aircraft performance from the perspective of aerodynamic
efficiency and safety. Knowledge of the wing position during flight can aid active control methods designed to mitigate
problems due to increased wing flexibility. Current approaches to measuring wing deflection, including strain
measurement devices, accelerometers, and GPS solutions, as well as newer technologies such as fiber optic strain sensors,
have limitations in their practical application to flexible aircraft control. Hence, it was proposed to use a biomimetic optical
sensor based on the fly eye to track wing deflection in real time. The fly-eye sensor has several advantages over
conventional sensors used for this application, including light weight, low power requirements, fast computation, and a
small form factor. This paper reports on the fly-eye sensor development and its application to real-time wing deflection
measurement.
Musca domestica, the common house fly, possesses a powerful vision system that exhibits features such as fast, analog,
parallel operation and motion hyperacuity -- the ability to detect the movement of objects at far better resolution than
predicted by their photoreceptor spacing. Researchers at the Wyoming Information, Signal Processing, and Robotics
(WISPR) Laboratory have investigated these features for over a decade to develop an analog sensor inspired by the fly.
Research efforts have been divided into electrophysiology; mathematical, optical, and MATLAB-based sensor modeling;
physical sensor development; and applications. This paper provides an in-depth review of recent key results in some
of these areas, including development of a multiple-cartridge, light-adapting sensor constructed on both planar
and co-planar surfaces using off-the-shelf components. Both a photodiode-based approach and a fiber-based sensor will
be described. Applications in UAV obstacle avoidance, long-term building monitoring, and autonomous robot navigation
are also discussed.
This paper is a revision of a paper presented at the SPIE conference on Medical Imaging 2005: Physiology, Function, and Structure from Medical Images, Feb. 2005, San Diego, California. The paper presented there appears (unrefereed) in SPIE Proceedings Vol. 5746.
Segmentation, or separating an image into distinct objects, is the key to creating 3-D renderings from serial slice images. This is typically a manual process requiring trained persons to tediously outline and isolate the objects in each image. We describe a template-based semiautomatic segmentation method to aid in the segmentation process and 3-D reconstruction of microscopic objects recorded with a confocal laser scanning microscope (CLSM). The simple and robust algorithm is based on the creation of a user-defined object template, followed by automatic segmentation of the object in each of the remaining image slices. The user guides the process by selecting the initial image slice for the object template and labeling the object of interest. The algorithm is applied to mathematically defined shapes to verify the performance of the software, and then to biological samples, including neurons in the common housefly. The quest to further understand the visual system of the housefly provided the opportunity to develop this segmentation algorithm. Further application of the algorithm may extend to other registered and aligned serial section datasets with high-contrast objects.
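A minimal sketch of this kind of template propagation is given below, assuming the registered slices arrive as a 3-D NumPy array. The threshold rule and the overlap test are simplified stand-ins for the paper's template-matching step, and the function names are hypothetical.

    import numpy as np
    from scipy import ndimage

    def propagate_template(stack, seed_slice, seed_mask, threshold):
        """Carry a user-defined object template through a registered slice
        stack: in each subsequent slice, keep only the above-threshold
        connected components that overlap the object found in the
        previous slice."""
        labels = np.zeros(stack.shape, dtype=bool)
        labels[seed_slice] = seed_mask
        prev = seed_mask
        for z in range(seed_slice + 1, stack.shape[0]):
            binary = stack[z] > threshold
            components, _ = ndimage.label(binary)
            keep = np.unique(components[prev & binary])   # overlap test
            prev = np.isin(components, keep[keep > 0])
            labels[z] = prev
        return labels

A symmetric pass from the seed slice back toward the first slice would complete the reconstruction; it is omitted here for brevity.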
Since the mid-1980s, the development of a therapeutic, computer-assisted laser photocoagulation system to treat retinal disorders has progressed under the guidance of Dr. Welch, the Marion E. Forsman Centennial Professor of Engineering, Department of Biomedical Engineering, the University of Texas at Austin. This paper reviews the development of the system, related research in eye movement and laser-tissue interaction, and system implementation and testing. While subsets of these topics have been reported in prior publications, this paper brings the entire evolutionary design of the system together. We also discuss other recent "spinoff" uses of the system technology that have not been reported elsewhere and describe the impact of the latest technical advances on the overall system design.
Traditional imaging sensors for computer vision, such as CCD and CMOS arrays, have well-known limitations in detecting objects that are very small (that is, an object image small compared to the pixel size), are viewed in low-contrast situations, are moving very fast with respect to the sensor integration time, or are moving very small distances compared to the sensor pixel spacing. Any one, or a combination, of these situations can foil a traditional CCD or CMOS sensor array. Alternative sensor designs derived from biological vision systems promise better resolution and object detection in such situations. The patent-pending biomimetic vision sensor based on Musca domestica (the common house fly) is capable of reliably rendering objects despite challenging motion and low-contrast conditions. We discuss early results comparing the biomimetic sensor to commercial CCD sensors in terms of contrast and motion sensitivity in situations such as those listed above.
Musca domestica, the common house fly, has a simple yet powerful and accessible vision system. As early as 1885, Cajal noted that the fly's vision system is organized much like the human retina. The house fly has some intriguing vision system features, such as fast, analog, parallel operation. Furthermore, it has the ability to detect movement and objects at far better resolution than predicted by photoreceptor spacing, termed hyperacuity. We are investigating the mechanisms behind these features and incorporating them into next-generation vision systems. We have developed a prototype sensor that employs a fly-inspired arrangement of photodetectors sharing a common lens. The Gaussian-shaped acceptance profile of each photodetector, coupled with overlapping fields of view, provides the configuration necessary for obtaining hyperacuity data. The sensor is able to detect object movement with far greater resolution than that predicted by photoreceptor spacing. We have exhaustively tested and characterized the sensor to determine its practical resolution limit. Our tests, coupled with theory from Bucklew and Saleh (1985), indicate that the limit of the hyperacuity response may be related only to target contrast. We have also implemented an array of these prototype sensors that will allow two-dimensional position location. These high-resolution, low-contrast-capable sensors are being developed as a vision system for an autonomous robot and the next generation of smart wheelchairs. However, they are easily adapted for biological endoscopy, downhole monitoring in oil wells, and other applications.
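The hyperacuity claim, and its contrast limit, can be made concrete with a small sketch: for Gaussian acceptance profiles, the log-ratio of two adjacent detector outputs is linear in target position, so position can be recovered continuously, and noise matters more as contrast drops. All parameter values below are illustrative assumptions, not the characterized sensor's.

    import numpy as np

    SIGMA, SPACING = 0.9, 1.5          # assumed optics parameters
    rng = np.random.default_rng(0)

    def detector_pair(x, contrast, noise=1e-3):
        """Noisy Gaussian responses of two adjacent detectors to a target at x."""
        centers = np.array([0.0, SPACING])
        r = contrast * np.exp(-0.5 * ((x - centers) / SIGMA) ** 2)
        return r + rng.normal(0.0, noise, 2)

    def estimate_position(r1, r2):
        """Invert the Gaussian pair: log(r2/r1) is linear in target position,
        so x is recovered on a continuum (hyperacuity)."""
        return SPACING / 2.0 + (SIGMA ** 2 / SPACING) * np.log(r2 / r1)

    # Lower contrast pushes the responses toward the noise floor and degrades
    # the estimate, consistent with a contrast-limited hyperacuity response.
    for contrast in (1.0, 0.05):
        r1, r2 = detector_pair(0.6, contrast)
        print(f"contrast {contrast}: estimated x = {estimate_position(r1, r2):.3f}")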
Our understanding of the world around us is based primarily on three-dimensional information because of the environment in which we live and interact. Medical or biological image information is often collected in the form of two-dimensional, serial section images. As such, it is difficult for the observer to mentally reconstruct the three-dimensional features of each object. Although many image rendering software packages allow for 3D views of the serial sections, they lack the ability to segment, or isolate, different objects in the data set. Segmentation is the key to creating 3D renderings of distinct objects from serial slice images, like the separate pieces of a puzzle. This paper describes a segmentation method for objects recorded with serial section images. The user defines threshold levels and object labels on a single image of the data set, which are subsequently used to automatically segment each object in the remaining images of the same data set while maintaining boundaries between contacting objects. The performance of the algorithm is verified using mathematically defined shapes. It is then applied to the visual neurons of the housefly, Musca domestica. Knowledge of the fly's visual system may lead to improved machine vision systems, and this effort provided the impetus to develop the segmentation algorithm. The described segmentation method can be applied to any high-contrast serial slice data set that is well aligned and registered. The medical field alone has many applications for rapid generation of 3D segmented models from MRI and other medical imaging modalities.
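The boundary-maintaining rule can be sketched as follows: assign every above-threshold pixel to the nearest user-labeled seed, so contacting objects keep distinct labels. This is a simplified stand-in for the paper's method, with hypothetical names, assuming integer seed labels drawn on one image of the stack.

    import numpy as np
    from scipy import ndimage

    def segment_slice(image, seeds, threshold):
        """Segment one slice while keeping contacting objects separate.

        seeds -- integer array the size of the image: 0 is background,
                 1..N are the user-defined object labels."""
        foreground = image > threshold
        # For each pixel, find the coordinates of the nearest seed pixel...
        _, nearest = ndimage.distance_transform_edt(seeds == 0,
                                                    return_indices=True)
        # ...and take that seed's label, so touching objects stay distinct.
        nearest_label = seeds[tuple(nearest)]
        return np.where(foreground, nearest_label, 0)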
Two challenges to an effective, real-world computer vision system are speed and reliable object recognition. Traditional computer vision sensors such as CCD arrays take considerable time to transfer all the pixel values of each image frame to a processing unit. One way to bypass this bottleneck is to design a sensor front end with a biologically inspired analog, parallel design that offers preprocessing and adaptive circuitry capable of producing edge maps in real time. This biomimetic sensor is based on the eye of the common house fly (Musca domestica). Additionally, this sensor has demonstrated an impressive ability to detect objects at subpixel resolution. However, the image information provided by such a sensor is not a traditional bitmap transfer and therefore requires novel computational manipulations to make best use of the sensor output. The real-world object recognition challenge is addressed using a subspace method built on eigenspace object models created from multiple reference object appearances. In past work, the authors successfully demonstrated image object recognition techniques for surveillance images of various military targets using such eigenspace appearance representations. This work, later extended to partially occluded objects, can be generalized to a wide variety of object recognition applications. The technique is based upon a large body of eigenspace research described elsewhere. Briefly, the technique creates target models by collecting a set of target images and finding a set of eigenvectors that span the target image space. Once the eigenvectors are found, an eigenspace model (also called a subspace model) of the target is generated by projecting target images onto the eigenspace. New images are then projected onto the eigenspace for object recognition. For occluded objects, we project the image onto reduced-dimensional subspaces of the original eigenspace (i.e., a “subspace of a subspace” or a “sub-eigenspace”). We then measure how close a match we can achieve when the occluded target image is projected onto a given sub-eigenspace. We have found that this technique can significantly improve recognition of occluded objects. To manage the combinatorial “explosion” associated with selecting the number of subspaces required and projecting images onto those sub-eigenspaces for measurement, we use a variation on the A* (“A-star”) search method. The challenge of tying these two subsystems (the biomimetic sensor and the subspace object recognition module) together into a coherent and robust system is formidable. It requires specialized computational image and signal processing techniques that are described in this paper, along with preliminary results. The authors believe this approach will result in a fast, robust computer vision system suitable for the non-ideal real-world environment.
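A compact sketch of the eigenspace modeling and the sub-eigenspace occlusion test follows. It assumes flattened grayscale training images and substitutes a plain enumeration of candidate sub-eigenspaces for the paper's A*-guided search; all names are hypothetical.

    import numpy as np

    def build_eigenspace(train_images, k):
        """Find k eigenvectors spanning the target image space via SVD."""
        X = np.stack([im.ravel() for im in train_images]).astype(float)
        mean = X.mean(axis=0)
        _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
        return mean, Vt[:k]            # rows of Vt are the eigenvectors

    def match_score(image, mean, basis):
        """Project a test image onto the eigenspace; a small reconstruction
        residual indicates a match with the modeled target."""
        x = image.ravel().astype(float) - mean
        coeffs = basis @ x
        return np.linalg.norm(x - basis.T @ coeffs)

    def occluded_match_score(image, mean, basis, candidate_dims=(2, 4, 8)):
        """Occlusion handling: score the image against several reduced
        sub-eigenspaces and keep the best match. (The paper prunes this
        combinatorial search with an A*-style method; here we enumerate.)"""
        return min(match_score(image, mean, basis[:d]) for d in candidate_dims)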
A system for robotically assisted retinal surgery has been developed to rapidly and safely place lesions on the retina for photocoagulation therapy. This system provides real-time, motion-stabilized lesion placement for typical irradiation times of 100 ms. The system consists of three main subsystems: a global, digital-based tracking subsystem; a fast, local analog tracking subsystem; and a confocal reflectance subsystem to control lesion parameters dynamically. We have reported on these subsystems in previous SPIE presentations. This paper concentrates on the development of the second hybrid system prototype. Considerable progress has been made toward reducing the footprint of the optical system, simplifying the user interface, fully characterizing the analog tracking system, and using measurable lesion reflectance growth parameters to develop a noninvasive method to infer lesion depth. This method will allow dynamic control of laser dosimetry to provide similar lesions across the non-uniform retinal surface. These system improvements and progress toward a clinically significant system are covered in detail within this paper.
Ocular motility generated by various fixation strategies shows a lower propensity to visit laser-damaged retinal areas than non-damaged sites. This selectivity provides a non-invasive methodology for characterizing retinal pathology by mapping eye movement visitation under various visual-function fixation strategies. Ocular motor techniques for imaging eye movement maps of normal and damaged retinal regions are demonstrated with reference to target location at the retina. Eye movement data digitized from a contrast sensitivity task provided video of eye movement fixation patterns simultaneously with the retinal location of target placement during the periods of visual fixation required in a Landolt ring contrast sensitivity task. These data were digitized with specialized algorithms that linked target location with retinal morphology and pathology. In one patient with central macular retinal damage, retina-based maps demonstrated strong consistency with measurements made with a higher-resolution, non-retinal Purkinje eye movement apparatus. The eye movement maps differed primarily in eye movement density within a given area but were generally comparable with respect to the focal areas mapped in retinal space. These data suggest that lower-resolution, video-based imaging can provide a non-invasive assessment of laser-induced retinal damage.
A system for robotically assisted retinal surgery has been developed to rapidly and safely place lesions on the retina for photocoagulation therapy. This system provides real-time, motion-stabilized lesion placement for typical irradiation times of 100 ms. The system consists of three main subsystems: a global, digital-based tracking subsystem; a fast, local analog tracking subsystem; and a confocal reflectance subsystem. The tracking subsystems have been reported in previous SPIE presentations. This paper concentrates on the development of the confocal reflectance subsystem and its integration into the overall photocoagulation system. Specifically, our goal was to use measurable lesion reflectance growth curve parameters to develop a noninvasive method to infer lesion depth. This method will allow dynamic control of laser dosimetry to provide similar lesions across the non-uniform retinal surface.
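One way to extract such growth curve parameters is to fit a parametric model to the reflectance-versus-time trace recorded during irradiation and calibrate the fitted parameters against lesion depth. The saturating-exponential form below is an assumption for illustration; the paper's actual model and calibration may differ, and the function names are hypothetical.

    import numpy as np
    from scipy.optimize import curve_fit

    def growth_model(t, r0, amplitude, tau):
        """Assumed saturating-exponential model of lesion reflectance growth."""
        return r0 + amplitude * (1.0 - np.exp(-t / tau))

    def fit_growth_parameters(t, reflectance):
        """Fit the model to a measured reflectance trace; the amplitude and
        time constant are candidate features for inferring lesion depth."""
        p0 = [reflectance[0], np.ptp(reflectance), t[-1] / 3.0]
        (r0, amplitude, tau), _ = curve_fit(growth_model, t, reflectance, p0=p0)
        return amplitude, tau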
A new system for robotically assisted retinal surgery requires real-time signal processing of the reflectance signal from small targets on the retina. Laser photocoagulation is used extensively by ophthalmologists to treat retinal disorders such as diabetic retinopathy and retinal breaks. Currently, the procedure is performed manually and suffers from several drawbacks which a computer-assisted system could alleviate. Such a system is under development that will rapidly and safely place multiple therapeutic lesions at desired locations on the retina in a matter of seconds. This system provides real-time, motion-stabilized lesion placement for typical clinical irradiation times. A reflectance signal from a small target on the retina is used to derive high-speed tracking corrections to compensate for patient eye movement by adjusting the laser pointing angles. Another reflectance signal from a different small target on the retina is used to derive information to control the laser irradiation time, which allows consistent lesion formation over any part of the retina. This paper describes the electro-optical system which dynamically measures the two reflectance signals, determines the appropriate reflectance parameters in real time, and controls laser pointing and irradiation time to meet the stated requirements.
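The pointing-correction side of that loop can be sketched as a simple proportional-integral controller, assuming an upstream stage has already converted the reflectance signal into an (x, y) offset of the tracked target; the gains and the class interface here are illustrative, not the system's tuned design.

    class TrackingLoop:
        """Per-tick pointing correction driving two galvanometer mirrors."""

        def __init__(self, kp=0.5, ki=0.05):
            self.kp, self.ki = kp, ki        # illustrative gains
            self.integral = [0.0, 0.0]
            self.angles = [0.0, 0.0]         # current mirror pointing angles

        def update(self, error_x, error_y):
            """Adjust the laser pointing angles to cancel the measured
            target offset caused by eye movement."""
            for i, err in enumerate((error_x, error_y)):
                self.integral[i] += err
                self.angles[i] -= self.kp * err + self.ki * self.integral[i]
            return tuple(self.angles)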
Laser photocoagulation is used extensively by ophthalmologists to treat retinal disorders such as diabetic retinopathy and retinal breaks and tears. Currently, the procedure is performed manually and suffers from several drawbacks: it often requires many clinical visits, it is very tedious for both patient and physician, the laser pointing accuracy and safety margin are limited by a combination of the physician's manual dexterity and the patient's ability to hold their eye still, and there is a wide variability in retinal tissue absorption parameters. A computer-assisted hybrid system is under development that will rapidly and safely place multiple therapeutic lesions at desired locations on the retina in a matter of seconds. In the past, one of the main obstacles to such a system has been the ability to track the retina and compensate for any movement with sufficient speed during photocoagulation. Two different tracking modalities (digital image-based tracking and analog confocal tracking) were designed and tested in vivo on pigmented rabbits. These two systems are being seamlessly combined into a hybrid system which provides real-time, motion stabilized lesion placement for typical irradiation times (100 ms). This paper will detail the operation of the hybrid system and efforts toward controlling the depth of coagulation on the retinal surface.
KEYWORDS: Eye, Analog electronics, Retina, Confocal microscopy, Optical tracking, Reflectometry, Reflectivity, In vivo imaging, Laser coagulation, Argon ion lasers
We describe initial in vivo experimental results of a new hybrid digital and analog design for retinal tracking and laser beam control. An overview of the design is given. The results show in vivo tracking rates which exceed the equivalent of 38 degrees per second in the eye, with automated lesion pattern creation. Robotically-assisted laser surgery to treat conditions such as diabetic retinopathy and retinal breaks may soon be realized under clinical conditions with requisite safety using standard video hardware and inexpensive optical components based on this design.
Researchers at the USAF Academy and the University of Texas are developing a computer-assisted retinal photocoagulation system for the treatment of retinal disorders (e.g., diabetic retinopathy, retinal tears). Currently, ophthalmologists manually place therapeutic retinal lesions, an acquired technique that is tiring for both the patient and physician. The computer-assisted system under development can rapidly and safely place multiple therapeutic lesions at desired locations on the retina in a matter of seconds. Separate prototype subsystems have been developed to control lesion depth during irradiation and lesion placement to compensate for retinal movement. Both subsystems have been successfully demonstrated in vivo on pigmented rabbits using an argon continuous wave laser. Two different design approaches are being pursued to combine the capabilities of both subsystems: a digital imaging-based system and a hybrid analog-digital system. This paper will focus on progress with the digital imaging-based prototype system. A separate paper on the hybrid analog-digital system, 'Hybrid Retinal Photocoagulation System', is also presented in this session.
The initial experimental results of a new hybrid digital and analog design for retinal tracking and laser beam control are described. The results demonstrate tracking rates that exceed the equivalent of 60 deg per second in the eye, with automatic creation of lesion patterns and robust loss of lock detection. Robotically assisted laser surgery to treat conditions such as diabetic retinopathy and retinal tears can soon be realized under clinical conditions with requisite safety using standard video hardware and inexpensive optical components.
Successful retinal tracking subsystem testing results in vivo on rhesus monkeys using an argon continuous wave laser and an ultra-short pulse laser are presented. Progress on developing an integrated robotic retinal laser surgery system is also presented. Several interesting areas of study have developed: (1) 'doughnut'-shaped lesions that occur under certain combinations of laser power, spot size, and irradiation time, complicating measurements of central lesion reflectance; (2) the optimal retinal field of view to achieve simultaneous tracking and lesion parameter control; and (3) integrated system implementation using a fully digital tracker versus a hybrid analog/digital tracker based on confocal reflectometry. These areas are investigated in detail in this paper. The hybrid system warrants a separate presentation and appears in another paper at this conference.
We describe initial experimental results of a new hybrid digital and analog design for retinal tracking and laser beam control. Initial results demonstrate tracking rates which exceed the equivalent of 50 degrees per second in the eye, with automatic lesion pattern creation and robust loss of lock detection. Robotically assisted laser surgery to treat conditions such as diabetic retinopathy, macular degeneration, and retinal tears can now be realized under clinical conditions with requisite safety using standard video hardware and inexpensive optical components.
Researchers at the University of Texas at Austin's Biomedical Engineering Laser Laboratory and the U.S. Air Force Academy's Department of Electrical Engineering are developing a computer-assisted prototype retinal photocoagulation system. The project goal is to rapidly, precisely, and automatically place laser lesions in the retina for the treatment of disorders such as diabetic retinopathy and retinal tears while dynamically controlling the extent of each lesion. Separate prototype subsystems have been developed to control lesion parameters (diameter or depth) using lesion reflectance feedback and lesion placement using retinal vessels as tracking landmarks. Successful subsystem testing results in vivo on pigmented rabbits using an argon continuous wave laser are presented. A prototype integrated system design to simultaneously control lesion parameters and placement at clinically significant speeds is provided.
KEYWORDS: Computing systems, Retina, Prototyping, Argon ion lasers, Control systems, Frame grabbers, Laser systems engineering, Laser applications, Reflectivity, Camera shutters
Researchers at the University of Texas at Austin's Biomedical Engineering Laser Laboratory investigating the medical applications of lasers have worked toward the development of a retinal robotic laser system. The ultimate goal of this ongoing project is to precisely place and control the depth of laser lesions for the treatment of various retinal diseases such as diabetic retinopathy and retinal tears. Researchers at the USAF Academy's Department of Electrical Engineering have also become involved with this research due to similar interests. Separate low-speed prototype subsystems have been developed to control lesion depth using lesion reflectance feedback parameters and lesion placement using retinal vessels as tracking landmarks. Both subsystems have been successfully demonstrated in vivo on pigmented rabbits using an argon continuous wave laser. Work is ongoing to build a prototype system to simultaneously control lesion depth and placement. The instrumentation aspects of the prototype subsystems were presented at SPIE Conference 1877 in January 1993. Since then our efforts have concentrated on combining the lesion depth control subsystem and the lesion placement subsystem into a single prototype capable of simultaneously controlling both parameters. We have designated this combined system CALOSOS, for Computer-Aided Laser Optics System for Ophthalmic Surgery. An initial CALOSOS prototype design is provided. We have also investigated methods to improve system response time, including the use of high-speed, non-standard frame rate CCD cameras and high-speed local-bus frame grabbers hosted on personal computers. A review of system testing in vivo to date is provided in SPIE Conference proceedings 2374-49 (Novel Applications of Lasers and Pulsed Power, Dual-Use Applications of Lasers: Medical session).
KEYWORDS: Prototyping, In vivo imaging, Retina, Pulsed laser operation, Argon ion lasers, Control systems, Laser systems engineering, Laser tissue interaction, Computing systems, Laser applications
Researchers at the University of Texas at Austin's Biomedical Engineering Laser Laboratory investigating the medical applications of lasers have worked toward the development of a retinal robotic laser system. The overall goal of the ongoing project is to precisely place and control the depth of laser lesions for the treatment of various retinal diseases such as diabetic retinopathy and retinal tears. Researchers at the USAF Academy's Department of Electrical Engineering and the Optical Radiation Division of Armstrong Laboratory have also become involved with this research due to related interests. Separate low-speed prototype subsystems have been developed to control lesion depth using lesion reflectance feedback parameters and lesion placement using retinal vessels as tracking landmarks. Both subsystems have been successfully demonstrated in vivo on pigmented rabbits using an argon continuous wave laser. Work is ongoing to build a prototype system to simultaneously control lesion depth and placement. Following the dual-use concept, this system is being adapted for clinical use as a retinal treatment system as well as a research tool for military laser-tissue interaction studies. Specifically, the system is being adapted for use with an ultra-short pulse laser system at Armstrong Laboratory and Frank J. Seiler Research Laboratory to study the effects of ultra-short laser pulses on the human retina. The instrumentation aspects of the prototype subsystems were presented at SPIE Conference 1877 in January 1993. Since then our efforts have concentrated on combining the lesion depth control subsystem and the lesion placement subsystem into a single prototype capable of simultaneously controlling both parameters. We have designated this combined system CALOSOS, for Computer-Aided Laser Optics System for Ophthalmic Surgery. We have also investigated methods to improve system response time, including the use of high-speed, non-standard frame rate CCD cameras and high-speed frame grabbers hosted on personal computers featuring the 32-bit, 33 MHz PCI bus. Design details of an initial CALOSOS prototype are provided in SPIE Conference proceedings 2396B-32 (Biomedical Optics Conference, Clinical Laser Delivery and Robotics Session). This paper will review in vivo testing to date and detail planned system upgrades.
Laser induced retinal lesions are used to treat a variety of eye diseases. The size and location of these retinal lesions are critical for effective treatment and minimal complications. An automated system is under development for retinal photocoagulation to improve the accuracy of this treatment. Separate instrumentation systems have been developed to monitor and control lesion growth in real time to compensate for tissue inhomogeneity, and to track and compensate for retinal movement during irradiation. A real-time lesion feedback control system is implemented on a UNIX-based workstation. A CCD camera (30 frames/second) and coagulating laser are coaxially aligned such that images of the lesion can be acquired during laser irradiation. Parameters of these reflectance images are extracted by an image processor in real time and when certain preset thresholds are exceeded, the laser is shut off. The camera and laser legs are alternately shuttered during irradiation by a high-speed spinning wheel to prevent the reflected light from the laser from interfering with the reflectance signal. This system is coupled to a fundus camera for delivery to the eye.
Laser induced retinal lesions are used to treat a variety of eye diseases such as diabetic retinopathy and retinal detachment. In this treatment, an argon laser beam is directed into the eye through the pupil onto the fundus where the heat resulting from the absorbed laser light coagulates the retinal tissue. This thermally damaged region is highly scattering and appears as a white disk. The size of the retinal lesions is critical for effective treatment and minimal complications. A real time feedback control system is implemented that monitors lesion growth using two-dimensional reflectance images acquired by a CCD camera. The camera views the lesion formation on axis with the coagulating laser beam. The reflectance images are acquired and processed as the lesion forms. When parameters of the reflectance images that are correlated to lesion dimensions meet certain preset thresholds, the laser is shuttered. Results of feedback controlled lesions formed in vivo in pigmented rabbits are presented. An ability to produce uniform lesions despite variation in the tissue absorption or changes in laser power is demonstrated. This lesion control system forms part of a larger automated system for retinal photocoagulation.
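The frame-by-frame feedback rule lends itself to a short sketch: extract a reflectance parameter from each image as the lesion forms and shutter the laser once the parameter crosses its preset threshold. The parameter used here (mean brightness rise over a pre-irradiation baseline inside a region of interest) is an illustrative stand-in for the paper's correlated parameters, and the function names are hypothetical.

    import numpy as np

    def lesion_parameter(frame, baseline):
        """Illustrative reflectance parameter: mean brightness increase of
        the lesion region over its pre-irradiation baseline."""
        return float(frame.mean() - baseline)

    def feedback_control(frames, baseline, threshold, shutter_laser):
        """Per-frame (e.g., 30 Hz) loop: end irradiation as soon as the
        reflectance parameter correlated with lesion size crosses the
        preset threshold."""
        for frame in frames:
            if lesion_parameter(frame, baseline) >= threshold:
                shutter_laser()
                return True      # lesion reached its target size
        return False             # irradiation ended below threshold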
Laser induced retinal lesions are used to treat a variety of eye diseases such as diabetic retinopathy and retinal detachment. An instrumentation system has been developed to track a specific lesion coordinate on the retinal surface and provide corrective signals to maintain laser position on that coordinate. High-resolution retinal images are acquired via a CCD camera coupled to a fundus camera and video frame grabber. Optical filtering and histogram modification are used to enhance the retinal vessel network against the lighter retinal background. Six distinct retinal landmarks are tracked on the high-contrast image obtained from the frame grabber using two-dimensional blood vessel templates. The frame grabber is hosted on a 486 PC, which performs correction signal calculations using an exhaustive search on selected image portions. X and Y laser correction signals are derived from the landmark tracking information and provided to a pair of galvanometer-steered mirrors via a data acquisition and control subsystem. This subsystem also responds to patient inputs and to the lesion growth monitoring system. This paper begins with an overview of the robotic laser system design, followed by implementation and testing of a development system for proof of concept. The paper concludes with specifications for a real-time system.
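The landmark-tracking step reduces to template matching by exhaustive search over a small window around each landmark's last known position, with the landmark displacements averaged into a steering correction. A minimal sketch follows; it assumes each search window stays inside the image, and the names and the sum-of-squared-differences score are illustrative choices.

    import numpy as np

    def match_template(image, template, center, search_radius):
        """Exhaustive search for one blood-vessel template in a small window
        around its last known position; returns the best-matching location."""
        th, tw = template.shape
        cy, cx = center
        best, best_pos = np.inf, center
        for y in range(cy - search_radius, cy + search_radius + 1):
            for x in range(cx - search_radius, cx + search_radius + 1):
                patch = image[y:y + th, x:x + tw].astype(float)
                score = np.sum((patch - template) ** 2)
                if score < best:
                    best, best_pos = score, (y, x)
        return best_pos

    def correction_signal(image, templates, centers, search_radius=8):
        """Track several landmarks (the paper uses six) and average their
        displacements into X and Y corrections for the steering mirrors."""
        shifts = [np.subtract(match_template(image, t, c, search_radius), c)
                  for t, c in zip(templates, centers)]
        dy, dx = np.mean(shifts, axis=0)
        return dx, dy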