The Air Force Civil Engineer Center's C-17 Load Cart is a large, 150-ton machine, based on a modified Caterpillar 621G scraper, for testing experimental pavements used in airfield surface construction and repair. Long-lasting, durable, prepare-in-place, minimally resourced pavements are a critical technology for airfield damage repair, especially in expeditionary settings, and formulations must be tested under realistic loads without the expense and logistical challenges of using real aircraft. The Load Cart is an articulated vehicle consisting of the 621G tractor and a custom trailer carrying a weighted set of landing gear to simulate the loads exerted during aircraft landing and taxiing. During a test, a human driver repetitively traffics the vehicle hundreds of times over an experimental patch of pavement, following an intricate trafficking pattern, to evaluate the wear and mechanical properties of the pavement formulation. Driving the Load Cart is dull, repetitive, and prone to errors and to systematic variation between individual drivers. This paper describes the full-stack development of an autonomy kit for the Load Cart to enable repeatable testing without a driver. Open-source code (Robot Operating System), commercial off-the-shelf sensors, and a modular design based on open standards are exploited to achieve autonomous operation without GNSS, which is degraded by operation inside a metal test building. The Vehicle Control Unit is a custom interface in the PC/104 form factor that actuates the Load Cart via CAN J1939. Operational modes include manual, tele-operation, and autonomous.
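The J1939 actuation interface mentioned above rides on the 29-bit extended CAN identifier, which packs a 3-bit priority, an 18-bit parameter group number (PGN), and an 8-bit source address. As a minimal illustrative sketch, not taken from the paper's Vehicle Control Unit code, the identifier for a broadcast (PDU2) frame can be assembled as:

```python
def j1939_can_id(priority: int, pgn: int, source_addr: int) -> int:
    """Assemble a 29-bit J1939 extended CAN identifier.

    priority:    3-bit message priority (0 = highest, 7 = lowest)
    pgn:         18-bit Parameter Group Number; this simplified sketch
                 assumes a broadcast (PDU2) PGN, so no destination
                 address is folded into the PS field
    source_addr: 8-bit address of the transmitting node
    """
    return ((priority & 0x7) << 26) | ((pgn & 0x3FFFF) << 8) | (source_addr & 0xFF)

# Example: CCVS (Cruise Control / Vehicle Speed), PGN 0xFEF1,
# default priority 6, source address 0x00.
can_id = j1939_can_id(priority=6, pgn=0xFEF1, source_addr=0x00)
print(hex(can_id))  # -> 0x18fef100
```

In practice the identifier and an 8-byte payload would be handed to a CAN driver; the exact PGNs used to command the 621G's throttle, steering, and braking are specific to the vehicle and are not reproduced here.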
We present results from testing a multi-modal sensor system (consisting of a camera, a LiDAR, and a positioning system) for real-time object detection and geolocation. The system's eventual purpose is to assess damage and detect foreign objects on a disrupted airfield surface in order to reestablish a minimum airfield operating surface. It uses an AI model to detect objects and generate bounding boxes or segmentation masks in data acquired with a high-resolution area-scan camera, and it locates the detections in a local, sensor-centric coordinate system in real time using returns from a low-cost commercial LiDAR. This is accomplished via an intrinsic camera calibration together with a 3D extrinsic calibration of the camera-LiDAR pair. A coordinate-transform service uses data from a navigation system (comprising an inertial measurement unit and a global positioning system) to transform the local coordinates of the detections, obtained with the AI model and the calibrated sensor pair, into earth-centered coordinates. The entire sensor system is mounted on a pan-tilt unit to achieve 360-degree perception. All data acquisition and computation are performed on a low-SWaP-C system-on-module that includes an integrated GPU; the computer-vision code runs in real time on the GPU and has been accelerated using CUDA. We have chosen the Robot Operating System (ROS 1 at present, porting to ROS 2 in the near term) as the control framework for the system, and all computer-vision, motion, and transform services are configured as ROS nodes.
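The camera-LiDAR fusion step described above can be sketched as follows. Assuming the intrinsic matrix K and the extrinsic rotation R and translation t (LiDAR frame to camera frame) are known from calibration, LiDAR returns are projected into the image, and the returns falling inside a detection's bounding box yield its 3D position. The function names and the median-based range estimate here are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def project_points(points_lidar, K, R, t):
    """Project 3D LiDAR points into the camera image plane.

    points_lidar: (N, 3) points in the LiDAR frame
    K:            3x3 camera intrinsic matrix (from intrinsic calibration)
    R, t:         LiDAR-to-camera extrinsic rotation (3x3) and translation (3,)
    Returns pixel coordinates (M, 2) and the corresponding camera-frame
    points (M, 3), keeping only points in front of the camera.
    """
    pts_cam = points_lidar @ R.T + t        # transform into the camera frame
    pts_cam = pts_cam[pts_cam[:, 2] > 0]    # discard points behind the camera
    uvw = pts_cam @ K.T
    uv = uvw[:, :2] / uvw[:, 2:3]           # perspective divide -> pixels
    return uv, pts_cam

def locate_detection(bbox, points_lidar, K, R, t):
    """Estimate a detection's 3D position (camera frame) as the median of
    the LiDAR returns whose projections fall inside its bounding box."""
    u0, v0, u1, v1 = bbox
    uv, pts_cam = project_points(points_lidar, K, R, t)
    inside = ((uv[:, 0] >= u0) & (uv[:, 0] <= u1) &
              (uv[:, 1] >= v0) & (uv[:, 1] <= v1))
    if not inside.any():
        return None                          # no LiDAR support for this box
    return np.median(pts_cam[inside], axis=0)
```

The resulting camera-frame position would then be handed to the coordinate-transform service, which chains the pan-tilt, vehicle, and navigation-system poses to produce earth-centered coordinates.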