KEYWORDS: Magnetic sensors, Fractal analysis, Sensors, Surgery, Magnetism, Human-computer interaction, Signal processing, Mathematical modeling, Data conversion, Computing systems
In this paper, we present preliminary work on a novel wearable joystick for gloves-on human/computer interaction
in hazardous environments. In such environments, interacting with traditional input devices can be clumsy and inconvenient for the operator due to the bulkiness of multiple system components and troublesome wires.
During a collapsed structure search, for example, protective clothing, uneven footing, and "snag" points in
the environment can render traditional input devices impractical. Wearable computing has been studied by
various researchers to increase the portability of devices and to improve the proprioceptive sense of the wearer's
intentions. Specifically, glove-like input devices to recognize hand gestures have been developed for general-purpose
applications. However, regardless of their performance, prior gloves have been fragile and cumbersome to use in rough environments. In this paper, we present a new wearable joystick that removes the wires from a simple, two-degree-of-freedom glove interface, yielding a device that is low-cost, durable, robust, and wire-free at the glove. To evaluate the wearable joystick, we consider two metrics during operator tests of a commercial robot: task completion time and path tortuosity, the latter measured by fractal analysis. Preliminary user test results are presented comparing the performance of the wearable joystick with that of a traditional joystick.
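The abstract does not say which fractal estimator is used to quantify path tortuosity; one common choice is the box-counting dimension of the operator's 2-D trajectory. The Python sketch below illustrates that idea under those assumptions; the function name, scale values, and test paths are illustrative, not from the paper.

```python
import numpy as np

def box_counting_dimension(path, scales):
    """Estimate the fractal (box-counting) dimension of a 2-D path.

    path: (N, 2) array of x/y positions; scales: box edge lengths.
    Returns the slope of log N(eps) vs. log(1/eps): ~1 for a straight
    path, approaching 2 for a highly tortuous one.
    """
    path = np.asarray(path, dtype=float)
    origin = path.min(axis=0)
    counts = []
    for eps in scales:
        # Map each sample to the index of the grid box it falls in,
        # then count the distinct boxes the path visits at this scale.
        boxes = np.floor((path - origin) / eps).astype(int)
        counts.append(len({tuple(b) for b in boxes}))
    # Least-squares fit of log N(eps) = D * log(1/eps) + c.
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(scales)), np.log(counts), 1)
    return slope

# A jittery path yields a larger dimension than a straight one.
t = np.linspace(0.0, 10.0, 2000)
straight = np.column_stack([t, t])
rng = np.random.default_rng(0)
wiggly = straight + rng.normal(scale=0.15, size=straight.shape)
scales = [0.05, 0.1, 0.2, 0.4, 0.8]
print(box_counting_dimension(straight, scales))  # close to 1
print(box_counting_dimension(wiggly, scales))    # noticeably larger
```

A dimension near 1 thus corresponds to a direct route to the goal, while values approaching 2 indicate the meandering, tortuous paths that a clumsy interface tends to produce.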
Much work has been undertaken recently toward the development of low-power, high-performance sensor networks. There are many static remote sensing applications for which this is appropriate. The focus of this development effort is applications that require higher-performance computation but still involve severe constraints on power and other resources. Toward that end, we are developing a reconfigurable computing platform for miniature robotic and human-deployed sensor systems composed of several mobile nodes. The system provides static and dynamic reconfigurability for both software and hardware through the combination of a CPU (central processing unit) and an FPGA (field-programmable gate array), allowing on-the-fly reprogrammability. Static reconfigurability of the hardware manifests itself in the form of a "morphing bus" architecture that permits the modular connection of various sensors with no bus interface logic. Dynamic hardware reconfigurability provides for the reallocation of hardware resources at run-time as the mobile, resource-constrained nodes encounter unknown environmental conditions that render various sensors ineffective.

This computing platform will be described in the context of work on chemical/biological/radiological plume tracking using a distributed team of mobile sensors. The objective for a dispersed team of ground and/or aerial autonomous vehicles (or hand-carried sensors) is to acquire measurements of the concentration of the chemical agent from optimal locations and to estimate its source and spread. This requires appropriate distribution, coordination, and communication among the team members across a potentially unknown environment. The key problem is to estimate the parameters of the harmful agent's distribution and use these values to locate its source and predict its spread. The accuracy and convergence rate of this estimation process depend not only on the number and accuracy of the sensor measurements but also on their spatial distribution over time (the sampling strategy). When the sensors are deployed by humans, trajectories optimized to minimize human exposure are also important.
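As a minimal illustration of the estimation step, the sketch below fits an assumed isotropic Gaussian concentration model to noisy sensor readings by nonlinear least squares. The model, parameter names, and simulated data are assumptions for illustration only; the abstract specifies neither the dispersion model nor the estimator.

```python
import numpy as np
from scipy.optimize import least_squares

def gaussian_plume(params, xy):
    """Isotropic Gaussian concentration model (an illustrative stand-in
    for the actual dispersion model, which the text does not give).
    params = (sx, sy, q, sigma): source position, strength, spread."""
    sx, sy, q, sigma = params
    d2 = (xy[:, 0] - sx) ** 2 + (xy[:, 1] - sy) ** 2
    return q * np.exp(-d2 / (2.0 * sigma ** 2))

def estimate_source(xy, c, guess=(0.0, 0.0, 1.0, 1.0)):
    """Fit source location, strength, and spread to concentration
    samples c taken at positions xy via nonlinear least squares."""
    residual = lambda p: gaussian_plume(p, xy) - c
    return least_squares(residual, guess).x

# Simulated readings from sensors scattered over the survey area.
rng = np.random.default_rng(1)
xy = rng.uniform(-5.0, 5.0, size=(40, 2))
true = (1.5, -2.0, 3.0, 1.2)
c = gaussian_plume(true, xy) + rng.normal(scale=0.02, size=len(xy))
print(estimate_source(xy, c))  # should recover values near `true`
```

How well this fit is conditioned depends strongly on where the samples are taken, which is precisely the sampling-strategy question raised above.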
The systems described in this paper are currently under development. Parts of the system already exist, and some results from these parts are described.
A system of launchable miniature mobile robots with various sensors as payload is used for distributed sensing. The robots are projected to areas of interest either by a robot launcher or by a human operator using standard equipment. A wireless communication network is used to exchange information with the robots. Payloads such as a MEMS sensor for vibration detection, a microphone, and an active video module are used mainly to detect humans. The video camera provides live images through a wireless video transmitter, and a pan-tilt mechanism expands the effective field of view. There are strict restrictions on the total volume and power consumption of the payloads due to the small size of the robot, and emerging technologies are used to address these restrictions. In this paper, we describe the use of microrobotic technologies to develop active vision modules for the mesoscale robot. A single-chip CMOS video sensor is used along with a miniature lens approximately the size of a sugar cube. The device consumes 100 mW, about one-fifth the power consumption of a comparable CCD camera. Miniature gearmotors 3 mm in diameter are used to drive the pan-tilt mechanism, and a miniature video transmitter sends analog video signals from the camera.
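As a rough illustration of how a pan-tilt mechanism expands the effective field of view, the coverage is the mechanical sweep plus the lens's optical field of view, capped at a full circle. The angles in the sketch below are assumptions; the paper reports neither the lens FOV nor the pan-tilt travel.

```python
def effective_fov(camera_fov_deg, sweep_deg):
    """Effective field of view (degrees) gained by sweeping a camera
    across a pan or tilt axis: mechanical sweep plus optical FOV,
    capped at 360. All figures here are illustrative assumptions."""
    return min(360.0, camera_fov_deg + sweep_deg)

# e.g. a 45-degree lens on a +/-90-degree pan axis covers 225 degrees.
print(effective_fov(45.0, 180.0))
```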