The primary means by which air traffic tower controllers obtain information is through direct out-the-window
viewing, although a considerable amount of time is spent looking at electronic displays and other
information sources inside the tower cab. The Air Force Research Laboratory sponsored the development
of a prototype Augmented Reality Binocular System (ARBS) that enhances tower controller performance,
situation awareness, and safety. The ARBS is composed of a virtual binocular (VB) that displays real-time
imagery from high resolution telephoto cameras and sensors mounted on pan/tilt units (PTUs). The selected
PTU tracks the movement of the VB, which contains an inertial heading and elevation sensor. Relevant
airfield situation text and graphic depictions that identify airfield features are overlaid on the imagery. In
addition, the display is capable of labeling and tracking vehicles on which an Automatic Dependent
Surveillance - Broadcast (ADS-B) system has been installed. The ARBS provides air traffic controllers and
airfield security forces with the capability to orient toward, observe, and conduct continuous airfield
operations and surveillance/security missions from any number of viewing aspects in limited visibility
conditions. In this paper, we describe the ARBS in detail, discuss the results of a Usability Test of the
prototype ARBS, and discuss ideas for follow-on efforts to develop the ARBS to a fieldable level.
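The PTU-slaving behavior described in the abstract above can be sketched in a few lines: the selected pan/tilt unit follows the virtual binocular's inertial heading and elevation readings. This is a minimal illustration, not the ARBS implementation; the function name, mount-offset parameter, and axis limits are assumptions chosen for the example.

```python
def slave_ptu(vb_heading_deg, vb_elevation_deg, ptu_mount_heading_deg=0.0,
              pan_limits=(-180.0, 180.0), tilt_limits=(-30.0, 90.0)):
    """Map the virtual binocular's inertial heading/elevation to PTU pan/tilt.

    All parameter names and limits are illustrative assumptions, not the
    ARBS specification.
    """
    # Pan is the heading relative to the PTU's mounting azimuth,
    # wrapped into the range (-180, 180] degrees.
    pan = (vb_heading_deg - ptu_mount_heading_deg + 180.0) % 360.0 - 180.0
    # Clamp both axes to the unit's assumed mechanical travel.
    pan = max(pan_limits[0], min(pan_limits[1], pan))
    tilt = max(tilt_limits[0], min(tilt_limits[1], vb_elevation_deg))
    return pan, tilt
```

In a real system this mapping would run in a control loop, with the PTU continuously commanded toward the binocular's current line of sight.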
Tower controllers are responsible for maintaining safe separation between airborne aircraft in the airport traffic control area, and separation between aircraft, equipment, and personnel on the airport surface. The objective of this project was to develop and demonstrate an out-the-window, augmented viewing system concept for Air Force air traffic control tower personnel to reduce look-down time within the tower and to optimize visual airfield operations, particularly during limited visibility conditions. We characterized controller tasks where a near-to-eye display greatly aids performance and identified form factor variables that influence user acceptability of hardware configurations. We developed an "out-the-window concept of operation" and analyzed the hardware requirements and feasibility of three near-to-eye viewing systems: two head-mounted monocular displays (HMMD) and a held-to-head binocular display (HHBD). When fully developed, these display prototypes should enhance tower controller situation awareness, and reduce such distractions as having to frequently attend to and respond to head-down (console) display information. There are potential users of this display concept in all branches of the military services, and in the commercial sector. There is also potential utility for surface surveillance operations in support of homeland security, law enforcement personnel, rescue workers, firefighters, and special operations forces in non-aviation applications.
Tower controllers are responsible for maintaining separation between aircraft and expediting the flow of traffic in the air. On the airport surface, they also are responsible for maintaining safe separation between aircraft, ground equipment, and personnel. They do this by sequencing departing and arriving aircraft, and controlling the location and movement of aircraft, vehicles, equipment, and personnel on the airport surface. The local controller and ground controller are responsible for determining aircraft location and intent, and for ensuring that aircraft, vehicles, and other surface objects maintain a safe separation distance. During nighttime or poor visibility conditions, controllers' situation awareness is significantly degraded, resulting in lower safety margins and increased errors. Safety and throughput can be increased by using an Enhanced Vision System, based upon state-of-the-art infrared sensor technology, to restore critical visual cues. We discuss the results of an analysis of tower controller critical visual tasks and information requirements. The analysis identified: representative classes of ground obstacles/targets (e.g., aircraft, vehicles, wildlife); sample airport layouts and tower-to-runway distances; and obstacle subtended visual angles. We performed NVTherm modeling of candidate sensors and field data collections. This resulted in the identification of design factors for an airport surface surveillance Enhanced Vision System.
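The obstacle subtended-visual-angle analysis mentioned above reduces to simple geometry: an object's angular size at the eye (or sensor) follows from its physical extent and its range from the tower. The following sketch shows the standard calculation; the function name and example dimensions are illustrative assumptions, not values from the study.

```python
import math

def subtended_angle_arcmin(object_size_m, distance_m):
    """Visual angle subtended by an object of a given size at a given
    range, in arcminutes (standard small-target geometry)."""
    return math.degrees(2.0 * math.atan(object_size_m / (2.0 * distance_m))) * 60.0

# Example: a 2 m tall vehicle viewed from a tower 1500 m away
# subtends roughly 4.6 arcminutes.
angle = subtended_angle_arcmin(2.0, 1500.0)
```

Angles like this, combined with sensor resolution models such as NVTherm, determine whether a target class is detectable, recognizable, or identifiable at a given tower-to-runway distance.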
Night vision goggles (NVGs) can enhance military and civilian operations at night. With this increased capability comes the requirement to provide suitable training. Results from field experience and accident analyses suggest that problems experienced by NVG users can be attributed to a limited understanding of NVG limitations and to perceptual problems. In addition, there is evidence that NVG skills are perishable and require frequent practice. Formal training is available to help users obtain the required knowledge and skills. However, there often is insufficient opportunity to obtain and practice perceptual skills prior to using NVGs in the operational environment. NVG users need early and continued exposure to the night environment across a broad range of visual and operational conditions to develop and maintain the necessary knowledge and perceptual skills. NVG training has consisted of classroom instruction, hands-on training, and simulator training. Advances in computer-based training (CBT) and web-based training (WBT) have made these technologies very appealing as additions to the NVG training mix. This paper discusses our efforts to develop multimedia, interactive CBT and WBT for NVG training. We discuss how NVG CBT and WBT can be extended to military and civilian ground, maritime, and aviation NVG training.
Results from field experiments and accident data analyses suggest that the majority of the problems experienced by military drivers using image intensification (I2) devices, such as night vision goggles (NVGs), can be attributed to a limited understanding of their capabilities and limitations and to perceptual problems. In addition, there is evidence that skills for driving with these devices are highly perishable and require frequent practice for sustainment. At present there is little formal training available to help drivers obtain the required knowledge and skills and little opportunity to obtain and practice perceptual skills with representative imagery and scenarios prior to driving in the operational environment. The Night Driving Training Aid (NDTA) was developed for the U.S. Army to address this training deficiency. We previously reported interim results of our work to identify and validate training requirements, to develop instructional materials and customized instructional software, and to deliver the instruction in a multimedia, interactive PC environment. In this paper we focus on describing and illustrating the features and capabilities of the final prototype NDTA. In addition, we discuss technical and training issues addressed and lessons learned for developing a low-cost, effective PC-based night driving training aid.
The use of night vision devices (NVDs) has the potential for enhancing driving operations at night by allowing increased mobility and safer operations. However, with this increased capability has come the requirement to manage risks and provide suitable training. Results from field experiments and accident analyses suggest that problems experienced by drivers with NVDs can be attributed to a limited understanding of the NVD capabilities and limitations and to perceptual problems. There is little formal training available to help drivers obtain the required knowledge and skills and little opportunity to obtain and practice perceptual skills prior to driving in the operational environment. NVD users need early and continued exposure to the night environment across a broad range of visual conditions to develop and maintain the necessary perceptual skills. This paper discusses the interim results of a project to develop a Night Driving Training Aid (NDTA) for driving with image intensification (I2) devices. The paper summarizes work to validate requirements, develop instructional materials and software, and deliver the instruction in a multimedia, interactive PC environment. In addition, we discuss issues and lessons learned for training NVD driving knowledge and skills in a PC environment and extending the NDTA to thermal NVDs.
The Driver's Vision Enhancer (DVE) is a thermal sensor and display combination currently being procured for use in U.S. Army combat and tactical wheeled vehicles. During the DVE production process, a given number of sensor or display pixels may either vary from the desired luminance values (nonuniform) or be inactive (nonresponsive). The amount and distribution of pixel luminance nonuniformity (NU) and nonresponsivity (NR) allowable in production DVEs is a significant cost factor. No driver performance-based criteria exist for determining the maximum amount of allowable NU and NR. For safety reasons, these characteristics are specified conservatively. This paper describes an experiment to assess the effects of different levels of display NU and NR on Army drivers' ability to identify scene features and obstacles using a simulated DVE display and videotaped driving scenarios. Baseline, NU, and NR display conditions were simulated using real-time image processing techniques and a computer graphics workstation. The results indicate a small but statistically nonsignificant decrease in identification performance with the NU conditions tested. The pattern of the performance-based results is consistent with drivers' subjective assessments of display adequacy. The implications of the results for specifying NU and NR criteria for the DVE display are discussed.
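The simulated NU and NR display conditions described above can be approximated in software by perturbing per-pixel gain and deactivating a random subset of pixels. The sketch below shows one plausible way to do this; the function name, the Gaussian gain model, and the default NU/NR levels are illustrative assumptions, not the parameters used in the experiment.

```python
import numpy as np

def apply_nu_nr(frame, nu_sigma=0.05, nr_fraction=0.001, seed=0):
    """Degrade an 8-bit grayscale frame with simulated fixed-pattern
    nonuniformity (NU) and nonresponsive (NR) pixels.

    nu_sigma and nr_fraction are illustrative values, not DVE spec limits.
    """
    rng = np.random.default_rng(seed)
    # NU: a per-pixel multiplicative gain error, drawn once per sensor,
    # models fixed-pattern luminance nonuniformity.
    gain = 1.0 + rng.normal(0.0, nu_sigma, frame.shape)
    degraded = frame.astype(float) * gain
    # NR: a random subset of pixels is inactive (stuck at zero luminance).
    dead = rng.random(frame.shape) < nr_fraction
    degraded[dead] = 0.0
    return np.clip(degraded, 0, 255).astype(np.uint8)
```

Applying the same fixed gain and dead-pixel maps to every frame of a driving video would reproduce the fixed-pattern character of real sensor defects, as opposed to frame-to-frame temporal noise.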
The use of night vision devices (NVDs) by US Army foot soldiers, aviators, and drivers of combat and tactical wheeled vehicles has enhanced operations at night by allowing increased mobility and potentially safer operations. With this increased capability in the night environment has come an increased exposure to the hazards of that environment and the risks that the command structure must manage and balance with mission requirements. Numerous vehicular accidents have occurred during night field exercises involving drivers wearing image intensification (I2) systems. These accidents can frequently be attributed to perceptual problems experienced by the drivers. Performance with NVDs generally increases with practice and experience. However, there is little formal training provided in night driving skills and few opportunities to practice these skills under realistic conditions. This paper reports the approach and preliminary results of an effort to define and demonstrate a low-cost night driving simulator concept for training night driving skills with I2 devices and to identify and evaluate the techniques and resources that are available for implementing this approach.