Special Section on Perceptually Driven Visual Information Analysis

Automatic online vision-based inspection system of coupler yoke for freight trains

Author Affiliations
Chao Zheng, Zhenzhong Wei

Beihang University, School of Instrumentation Science and Optoelectronics Engineering, Beijing 100191, China

Beihang University, Key Laboratory of Precision Opto-mechatronics Technology, Beijing 100191, China

J. Electron. Imaging. 25(6), 061602 (Jul 18, 2016). doi:10.1117/1.JEI.25.6.061602
History: Received March 24, 2016; Accepted June 30, 2016

Open Access

Abstract. Fault inspection plays an important role in ensuring the safe operation of freight trains. With the development of computer vision technology, vision-based fault inspection has become one of its principal means. The coupler yoke is an important component of a train's connection system, and a fault in it can cause the separation of the train, leading to a serious accident. We propose an automatic image inspection system to inspect for faults in coupler yokes while a freight train is running. The inspection process is divided into two parts: the localization part and the recognition part. In the localization part, we propose multiple dimension features, design a fast algorithm to compute multiresolution image features, and use a linear support vector machine classifier to locate the coupler yoke. In the recognition part, we propose a fast decision tree training method that preprunes noneffective features, and use Adaboost decision trees as the final fault classifier. Experimental results show that the proposed method achieves a fault inspection rate of 98.6% with an average processing time of about 98 ms per image, demonstrating that our system has high inspection accuracy and good real-time performance.


Every day in China, tens of thousands of freight trains run on more than 90,000 km of railway.1 Freight train inspection, which focuses on evaluating the real-time condition of train components such as coupler yoke bolts, angle cocks, and dust collectors, is a critical task for ensuring the safety of railway traffic. However, most of this inspection work in China is still handled by inspectors and machinists. Because judgments are subjective and professional skill varies from person to person, manual analysis of a component's state is error-prone, increases the fault inspection time, and reduces railway transport efficiency.2 With the development of computer vision technology, a variety of vision-based inspection techniques have been introduced to replace the old manual ones.3 Many researchers have designed systems to capture images of different train components and have proposed methods to process the images automatically.4 In this paper, we establish a vision-based inspection system to detect faults in the coupler yoke part of freight trains, and install it in many railway stations to test its performance.

The coupler and draft gear device is an important component of the train's connection system; it is mainly composed of two parts, the coupler and the draft gear, which are connected together by the coupler yoke bolt. If the coupler yoke bolt is missing, the coupler will separate from the draft gear, which can cause the separation of the train cars, leading to serious accidents (see Fig. 1). To ensure the connection between the coupler and draft gear, railway companies need many experienced inspectors who check the coupler yoke bolts manually and regularly. However, this kind of inspection is extremely time-consuming and labor-intensive. It also leads to high maintenance costs and limits progress toward higher freight train speeds. Therefore, there is considerable interest in replacing the manual inspection process with automatic inspection systems based on advanced computer vision technologies.

Fig. 1: An incident of the separation in a train body caused by a fault in the coupler yoke.

The key to solving this problem is to capture an image of the coupler yoke and determine whether there is a fault in it using pattern recognition methods. Therefore, we propose an automatic vision-based image acquisition and fault inspection system. We install five high-speed cameras to simultaneously capture images of moving trains, and the inspection process is divided into two main parts: the localization part and the recognition part, as shown in Fig. 2.

Fig. 2: Block diagram illustrating the functional parts in the proposed inspection system.

The cameras are installed outdoors, and the appearance of the coupler yoke may change significantly under various environmental conditions, so it is hard to find an existing mature feature that represents the coupler yoke's appearance. To better locate the coupler yoke in the localization part, we present a feature called multiple dimension features (MDF). MDF contains seven feature dimensions: normalized gradient magnitude and a histogram of oriented gradients on six orientations. To deal with the change in scale of the coupler yoke in the image, we also design a fast algorithm to compute multiscale image features, which produces an MDF feature pyramid much faster than a finely sampled pyramid. After the features have been extracted, we construct a hyperplane classifier to realize coupler yoke localization. In detail, using a support vector machine (SVM) classifier,5 we map the feature vectors of the training set into a high-dimensional space, perform a linear classification, and determine an optimal hyperplane that minimizes the cost of misclassification between the coupler yoke and the background. The classifier then completes the localization of the coupler yoke.

In the recognition part, we choose Adaboost decision trees as the classifier to recognize a missing coupler yoke bolt. Adaboost is one of the most popular learning techniques in use today; it combines many weak learners to form a single strong one.6 While fast at test time, Adaboost decision trees are relatively slow to train. To overcome this drawback, we propose a fast decision tree training method that preprunes noneffective features. Our method exploits the upper bound of the error at each decision tree node to prune ineffectual features early in training, which greatly speeds up classifier training while maintaining identical performance. Finally, the trained Adaboost decision trees classifier distinguishes absent bolts from present ones, so a coupler yoke fault can be accurately detected. The main contributions of this paper are as follows.

  1. We propose an automatic vision-based system to detect coupler yoke bolt faults on freight trains, and it is shown to be an ideal substitute for traditional manual inspection. The detection efficiency is enhanced and the maintenance costs are significantly reduced.
  2. To deal with the significant appearance changes of coupler yoke bolts caused by varying environmental conditions, we propose a feature called MDF. MDF contains seven feature dimensions: normalized gradient magnitude and a histogram of oriented gradients on six orientations. It represents the coupler yoke effectively and outperforms other common features on this task.
  3. To deal with the change in scale of the coupler yoke in the image, we present a fast method to compute multiresolution image features. Using this approach, we obtain an MDF feature pyramid much faster than a finely sampled pyramid, without sacrificing performance.
  4. To overcome the slow training of Adaboost decision trees, we propose a fast decision tree training method that preprunes noneffective features. Our method offers a large speedup in classifier training while maintaining identical performance.

As a railway safety system, inspection reliability is one of the most important aspects. We divide the inspection process into two parts and adopt a whole-to-part hierarchical inspection framework to establish a highly stable coupler yoke inspection system.

In the transportation field, fault diagnosis and detection methods can be divided into two main categories: visual and nonvisual. One of the most successful nonvisual methods is undoubtedly the acoustics-based fault diagnosis approach for roller bearings.7 However, acoustics has its limits, and other types of faults cannot be detected by such trackside acoustic detection systems. In comparison, vision-based methods can detect a larger range of faults, and with the development of computer vision technology, more and more vision-based fault inspection systems have been developed.

In the beginning, researchers focused on detecting faults of railway tracks. Marino et al.8 proposed a visual inspection system to automatically detect the presence or absence of the fastening bolts that fix the rails to the sleepers. They employed wavelet transformation and principal component analysis (PCA) to preprocess railway images: the data were processed by two discrete wavelet transforms in parallel and then passed to two multilayer perceptron (MLP) neural classifiers to complete the inspection. De Ruvo et al.9 applied the error back-propagation algorithm to model hexagonal-headed bolts; to achieve real-time performance, they implemented the detection algorithm on graphics processing units (GPUs).

As vision-based inspection systems become more common, researchers have applied them to detect faults in different components of freight trains. Hart et al.10 used multispectral machine vision for monitoring the health of rolling stock and locomotive traction motors. Kim et al.11 used an image analysis module to accurately measure three specifications: the thickness, any unbalanced wear on the brake shoes, and the distances between the brake shoes and the wheels. The Trouble of Freight Car Detection System (TFDS) is a vision-based fault detection system for the key components of running freight trains in China. Using images taken by TFDS, Zhou et al.12 combined a gradient encoding histogram feature and an SVM classifier to inspect for missing handles on an angle cock. Li et al.13 proposed an automatic recognition method for the loss of a train bogie center plate bolt. Liu et al.14 used the Hough transform and symmetry validation to detect displacement faults of the bearing weight saddle. Zhu et al.15 proposed a novel approach to extract complex shapes from TFDS images based on discrete-point sampling and centerline grouping, but they did not demonstrate further use of their method for fault detection. To the best of our knowledge, no previous work has discussed the visual inspection of coupler yoke bolts; we believe our method is the first to focus on detecting this fault.

Visual methods are based on images, so the first step for any vision-based inspection system is to acquire them. To detect faults of coupler yokes, we need dynamic images of the coupler yokes. These are acquired by an inspection system widely used on Chinese railways, the Trouble of Freight Car Detection System, in which an online inspection of the train's key components is performed to prevent dangerous situations. The TFDS consists of three major modules: the dynamic image gathering module that takes the images, the data transfer station that controls and transfers the images, and the image recognition module that completes the inspection.

As shown in Fig. 3, five high-speed cameras are installed to inspect different parts of the freight trains online. Two are installed on the side of the tracks to acquire profile images of the freight trains; the others are installed in the center of the railway to acquire bottom images. When a train runs through, the magnets generate electric signals and send them to the data transfer station. The control part of the transfer station then turns on the LED light sources and opens the camera protection gate, and the five cameras begin to simultaneously capture dynamic images of the key components of the moving train. Finally, the data transfer station transmits the images to the remote monitoring server as the input of the recognition module, where they are analyzed to detect faults of the train components.

Fig. 3: The sketch of the TFDS system.

Between 2500 and 3000 images are taken of one freight train, depending on the number of carriages it pulls. Each image is 1024×1200 pixels, 24-bit color, in JPEG format. Fifty-nine part images are captured for each freight carriage: 45 by the bottom cameras and the remaining 14 by the trackside cameras. As our research focuses on automatically detecting coupler yoke bolt faults, the images captured by the bottom cameras are used in this paper. Acquired images of coupler yokes are shown in Fig. 4, where the object inside the white rectangle is the coupler yoke. The following three requirements make this coupler yoke bolt inspection a challenging problem.

  1. TFDS is designed to acquire images of trains at speeds between 10 and 160 km/h. When the speed of a freight train is too high, the images will be blurry. For this dynamic inspection, low resolution, blurred images, and environmental uncertainty are difficulties that must be considered.
  2. The cameras are installed outdoors, so the appearance of the coupler yoke bolt changes under varying illumination. Additionally, the coupler yoke is sometimes polluted by leaked grease or dust, so localizing the coupler yoke in the image is a challenging task.
  3. Difficulties also stem from online detection. Generally, the shortest interval between one freight train and the next is 5 min, which imposes high requirements on the reliability and speed of the inspection algorithm.

Fig. 4: The coupler yoke images acquired by the TFDS system: (a) sample image where the coupler yoke is installed on the left, (b) sample image where the coupler yoke is installed on the right.

The inspection process is divided into two parts, the localization part and the recognition part, as shown in Fig. 2. We first locate the coupler yoke in the image and then pass its image patch to the recognition part to determine whether there is a fault. In this section, we focus on the localization of the coupler yoke, corresponding to the red rectangle in the flowchart of Fig. 2.

Multiple Dimension Features

Localization is usually realized through object detection methods. One of the most successful approaches to object detection is the sliding window paradigm.16 However, the performance of an object detection system is largely determined by its feature representation. Over the development of computer vision, many well-known features have been proposed: Haar features for faces,17 HOG features for pedestrians,18 DPM features for deformable objects,19 and so on. Researchers have never stopped proposing new features,20 because no single feature can successfully address every problem in every application.21 So in our inspection system, we need features that meet our requirements. As the cameras are installed outdoors, the appearance of the coupler yoke may change significantly with environmental conditions. Having failed to find an existing feature that represents the coupler yoke's appearance well, we design a new one, which we call MDF.

MDF contains seven dimensions of features: normalized gradient magnitude and histogram of gradients on six orientations. It is a combination of two different types of features, both of which are translationally invariant.

Gradient magnitude

A straightforward way to generate features that capture different aspects of an image is through the gradient magnitude (GM). It is a translationally invariant, nonlinear image transformation that captures unoriented edge strength. It is computed using isotropic differential operators with no angular preference, so it reflects the maximum intensity variation regardless of orientation. Edges and structure are important characteristics of the coupler yoke, and the gradient magnitude measures the strength of local luminance change, which can be regarded as an image edge profile. Thus, it is a proper feature to represent the appearance of the coupler yoke. Let $I$ denote an image. Its GM map can be computed as

$$G_I = \sqrt{(I \otimes h_x)^2 + (I \otimes h_y)^2}, \tag{1}$$

where $\otimes$ is the linear convolution operator and $h_d$, $d \in \{x, y\}$, is the Gaussian partial derivative filter applied along the horizontal ($x$) or vertical ($y$) direction,

$$h_d(x, y \mid \sigma) = \frac{\partial}{\partial d}\, g(x, y \mid \sigma) = -\frac{d}{\sigma^2} \cdot \frac{1}{2\pi\sigma^2} \exp\left(-\frac{x^2 + y^2}{2\sigma^2}\right), \tag{2}$$

where $g(x, y \mid \sigma) = \frac{1}{2\pi\sigma^2} \exp\left(-\frac{x^2 + y^2}{2\sigma^2}\right)$ is the isotropic Gaussian function with scale parameter $\sigma$.
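As a concrete illustration, the GM map of Eqs. (1) and (2) can be computed with off-the-shelf Gaussian derivative filters. This is a minimal sketch rather than the authors' implementation, and the value of σ is an assumption.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def gradient_magnitude(image, sigma=1.0):
    """GM map of Eq. (1): sqrt((I * h_x)^2 + (I * h_y)^2)."""
    I = image.astype(np.float64)
    # Gaussian partial derivatives of Eq. (2): first-order along one axis.
    dx = gaussian_filter(I, sigma, order=(0, 1))  # derivative along x (columns)
    dy = gaussian_filter(I, sigma, order=(1, 0))  # derivative along y (rows)
    return np.sqrt(dx ** 2 + dy ** 2)
```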

Histogram of gradient orientations

A histogram of gradient orientations is a weighted histogram in which the bin index is determined by the gradient orientation. The gradient magnitude feature above captures the structure of the coupler yoke, but structure information alone is not enough to detect the component accurately; we also need gradient orientation information. Local object appearance and shape can often be characterized rather well by the distribution of local intensity gradients or edge directions, even without precise knowledge of the corresponding gradient or edge positions. The orientation analysis is robust to lighting changes, and the histogram gives translational invariance. The histogram of gradient orientations summarizes the distribution of measurements within image regions and is particularly useful for detecting textured objects with deformable shapes; thus, it extracts more textural details of the coupler yoke. It is also simpler and faster to compute than the more complicated HOG feature. In our project, we quantize the gradient angle into six orientations. To compute the histogram of gradient orientation features, the gradient orientation $\theta$ is first calculated from the image $I$ at each pixel,

$$\theta(x, y) = \arctan\frac{\Delta y}{\Delta x}, \tag{3}$$

where $\Delta y = I(x, y+1) - I(x, y-1)$ and $\Delta x = I(x+1, y) - I(x-1, y)$. Next, the gradient orientation at each pixel is discretized into six orientations, coding $\theta$ as $\theta^*$ with numbers ranging from 1 to 6:

$$\theta^* = \begin{cases} 1, & \theta \in (-\pi/12,\, \pi/12] \\ 2, & \theta \in (\pi/12,\, \pi/4] \\ 3, & \theta \in (\pi/4,\, 5\pi/12] \\ 4, & \theta \in (5\pi/12,\, \pi/2] \cup [-\pi/2,\, -5\pi/12] \\ 5, & \theta \in (-5\pi/12,\, -\pi/4] \\ 6, & \theta \in (-\pi/4,\, -\pi/12]. \end{cases} \tag{4}$$

Finally, these discretized gradient orientations are aggregated over a dense grid of nonoverlapping square image regions, each containing 4×4 pixels. Each region is thus represented by a 6-bin histogram of gradient orientations, with each bin representing one orientation. We use $\{o_1, o_2, \ldots, o_6\}$ to denote the six bins of the histogram; we thereby obtain six vectors representing the gradient feature on the six orientations.
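The following sketch implements Eqs. (3) and (4) over 4×4 cells. Magnitude-weighted votes and wrap-around image borders are our assumptions for brevity; the paper describes a weighted histogram without fixing these details.

```python
import numpy as np

def orientation_histograms(image, cell=4, n_bins=6):
    I = image.astype(np.float64)
    dy = np.roll(I, -1, axis=0) - np.roll(I, 1, axis=0)  # I(x, y+1) - I(x, y-1)
    dx = np.roll(I, -1, axis=1) - np.roll(I, 1, axis=1)  # I(x+1, y) - I(x-1, y)
    theta = np.arctan(dy / (dx + 1e-12))                 # Eq. (3), in (-pi/2, pi/2)
    # Eq. (4): six pi/6-wide bins; bin 4 (index 3) wraps around +/- pi/2.
    bins = np.floor((theta + np.pi / 12) / (np.pi / 6)).astype(int) % n_bins
    mag = np.sqrt(dx ** 2 + dy ** 2)                     # assumed vote weight
    H, W = I.shape
    hist = np.zeros((H // cell, W // cell, n_bins))
    for i in range(H // cell):
        for j in range(W // cell):
            b = bins[i * cell:(i + 1) * cell, j * cell:(j + 1) * cell].ravel()
            w = mag[i * cell:(i + 1) * cell, j * cell:(j + 1) * cell].ravel()
            hist[i, j] = np.bincount(b, weights=w, minlength=n_bins)
    return hist  # one 6-bin histogram per nonoverlapping 4x4 cell
```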

MDF consists of the above two kinds of features, the gradient magnitude and the histogram of gradient orientations, and performs well in our application. An image of the coupler yoke and its computed MDF features are shown in Fig. 5.

Fig. 5: MDF features: (a) example image, (b) computed GM features, and (c) histogram of gradients on six orientations.

Fast Multiscale Feature Pyramid

To deal with the change in scale of the coupler yoke in the image, we need a way to compute multiscale image features. A feature pyramid is a multiscale representation of an image at every scale, so computing a feature pyramid of our MDF features at every scale of the coupler yoke image is a good way to solve this problem. However, computing features at every scale of a finely sampled image pyramid is the computational bottleneck of many modern detectors.22 Our MDF features face the same drawback, so we design a fast algorithm to compute multiscale features and obtain an MDF feature pyramid.

Let $I$ denote an image as before, and let $I_s = R(I, s)$ denote $I$ resampled by scale $s$. We use $M = C(I)$ to denote the MDF features computed on image $I$, and we want the MDF features of $I_s$ at scale $s$. The standard approach is to compute $M_s = C(I_s)$, ignoring the information contained in $M = C(I)$. Instead, we propose the following approximation:

$$M_s = R(M, s) \cdot s^{-\lambda}. \tag{5}$$

Scales are sampled evenly in log-space starting at $s = 1$, typically with four scales per stage (a stage is the interval between one scale and another with half or double its value). The standard approach to constructing a feature pyramid is to compute $M_s = C[R(I, s)]$ for every $s$; see Fig. 6 (top).

The approximation in Eq. (5) suggests a straightforward method for efficient feature pyramid construction. We begin by computing $M_{s'} = C[R(I, s')]$ at just one scale per stage, $s' \in \{1, 1/2, 1/4, \ldots\}$. At intermediate scales, $M_s$ is computed using $M_s = R(M_{s'}, s/s') \cdot (s/s')^{-\lambda}$, where $s'$ is the nearest stage scale for which we have $M_{s'} = C(I_{s'})$; see Fig. 6 (bottom).

Computing $M_{s'} = C[R(I, s')]$ at one scale per stage provides a good trade-off between speed and accuracy. The cost of evaluating the MDF pyramid is within 33% of computing the MDF at the original scale, and features never need to be approximated beyond half a stage, which keeps the error low. Alternative schemes, such as interpolating between two nearby stage scales $s'$ for each intermediate scale $s$, or evaluating $M$ more densely, could yield even higher pyramid accuracy (at increased cost). However, the proposed approach proves sufficient for our coupler yoke localization.
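A sketch of this construction follows. Here compute_mdf stands in for the exact feature computation C(·), and the value of λ is an assumption: the authors do not report theirs, and in practice it would be estimated from data.

```python
import numpy as np
from scipy.ndimage import zoom

def fast_feature_pyramid(image, compute_mdf, n_stages=3, per_stage=4, lam=0.1):
    """Exact features once per stage; Eq. (5) for intermediate scales.
    compute_mdf maps a 2D image to an (H, W, 7) MDF feature array."""
    stage_scales = [2.0 ** -k for k in range(n_stages)]  # s' in {1, 1/2, 1/4}
    exact = {s: compute_mdf(zoom(image, s)) for s in stage_scales}
    pyramid = []
    for k in range(n_stages * per_stage):
        s = 2.0 ** (-k / per_stage)                      # log-spaced scales
        s_ref = min(stage_scales, key=lambda r: abs(np.log(s / r)))  # nearest s'
        ratio = s / s_ref
        # M_s = R(M_s', s/s') * (s/s')^(-lambda), Eq. (5)
        M_s = zoom(exact[s_ref], (ratio, ratio, 1)) * ratio ** (-lam)
        pyramid.append(M_s)
    return pyramid
```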

Localization of Coupler Yokes

In the localization process, we choose a detection window of 136×136 pixels, which fits the scale of the coupler yoke in the image well. We then divide the detection window into 4×4-pixel blocks (16 pixels each) to accurately describe the structural features of the coupler yoke. MDF contains seven feature dimensions: one dimension of gradient magnitude and six dimensions of the histogram of gradient orientations. So for each block, we obtain a seven-dimensional vector representing the MDF feature. Thus, a 136×136 detection window contains 34×34 = 1156 candidate vectors with 136×136×7/16 = 8092 feature dimensions in total. For 1400×1024 images, computing the features runs at over 75 fps on a modern PC, and further gains could be obtained by code optimization or a GPU.23

Another important part of feature extraction is the construction of MDF feature pyramids. Computing feature pyramids at stage-spaced scale intervals normally runs at 50 fps on 1400×1024 images, while computing exact feature pyramids with four scales per stage slows to 15 fps, precluding real-time inspection of coupler yoke faults. In contrast, our fast pyramid construction, which approximates three of the four scales per stage, runs at nearly 35 fps.

In addition to the MDF features, accurate and reliable localization of the coupler yoke requires an effectively trained SVM classifier. To enable the classifier to distinguish the coupler yoke from other components, we establish a training sample database containing two sample sets, a positive set and a negative one. In the positive training sample set DP_Train, we collect image patches of coupler yokes with a size of 136×136 pixels [as shown in Fig. 7(a)]. We also build the negative training sample set DN_Train, which contains no coupler yokes, by extracting 136×136-pixel patches from other parts of the images [see Fig. 7(b)]. Using the whole training sample database DTrain = DP_Train ∪ DN_Train, we train a linear SVM classifier HSVM to locate the coupler yoke.

Fig. 7: Positive and negative images of the training database: (a) positive images containing coupler yokes, (b) negative images containing other patches.

The SVM is a well-founded computational learning method based on statistical learning theory. It generalizes well, so it can achieve high classification accuracy on various object detection tasks.5 Given the feature vectors of the training samples, an SVM finds the separating hyperplane with maximum distance to the closest patterns. Using the kernel trick, an SVM can also map the input data into a higher-dimensional space to perform a nonlinear separation; in our system, a linear SVM proves sufficient.

In our training, let $X = \{x_1, x_2, \ldots, x_N\}$ be the set of MDF feature vectors of the training samples, let $Y = \{y_1, y_2, \ldots, y_N\}$ be the corresponding labels, and let $N$ be the number of samples in the training database DTrain, i.e., the total number of positive and negative samples. The hyperplane parameter $\tilde{\omega}$ of the linear classifier in our method is then determined as

$$\tilde{\omega} = \arg\min_{\omega}\left\{\frac{1}{2}\omega^{T}\omega + C\sum_{i=1}^{N}\max(1 - y_i\,\omega^{T}x_i,\; 0)^{2}\right\}. \tag{6}$$
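Equation (6) is the standard L2-regularized squared hinge loss, so any linear SVM solver that optimizes it applies. A minimal sketch with scikit-learn, assuming X holds the 8092-dimensional MDF vectors and y the ±1 labels:

```python
from sklearn.svm import LinearSVC

def train_localizer(X, y, C=1.0):
    """Solve Eq. (6); LinearSVC's default loss is the squared hinge."""
    clf = LinearSVC(loss="squared_hinge", C=C)  # value of C is an assumption
    clf.fit(X, y)
    return clf  # clf.coef_ holds the hyperplane parameter omega-tilde
```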

To reduce the false alarm rate of the SVM classifier, we use a bootstrap method during training. First, using the initial training sample database DTrain, we obtain the optimal hyperplane parameter $\tilde{\omega}_1$, which determines an SVM classifier H1. Then we randomly extract new negative 136×136 image regions that contain no coupler yokes from other parts of the images, like the samples in DN_Train, and apply the classifier H1 to them. Some of them may be classified as positive samples, so we collect these misclassified samples into a new negative set DN2 and expand DN_Train to DN_Train ∪ DN2. On the new training database, we obtain a new hyperplane $\tilde{\omega}_2$ and its corresponding classifier H2. We iterate this process until the false alarm rate reaches a fixed threshold or the iterations exceed a maximum number k. The last hyperplane parameter $\tilde{\omega}_{final}$ then defines the final localization classifier HSVM.
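The bootstrap loop can be sketched as follows, with assumed helpers: sample_negatives() draws fresh 136×136 non-yoke patches and extract_mdf() computes their feature vectors; the false-alarm threshold is likewise an assumed value.

```python
import numpy as np

def bootstrap_train(X_pos, X_neg, sample_negatives, extract_mdf,
                    max_iter=5, fa_threshold=0.01):
    X = np.vstack([X_pos, X_neg])
    y = np.hstack([np.ones(len(X_pos)), -np.ones(len(X_neg))])
    clf = train_localizer(X, y)                     # H_1 on the initial D_Train
    for _ in range(max_iter):
        patches = sample_negatives()                # fresh non-yoke regions
        feats = np.array([extract_mdf(p) for p in patches])
        hard = feats[clf.predict(feats) == 1]       # misclassified negatives, D_N2
        if len(hard) / len(feats) <= fa_threshold:  # false alarm rate low enough
            break
        X = np.vstack([X, hard])                    # expand the negative set
        y = np.hstack([y, -np.ones(len(hard))])
        clf = train_localizer(X, y)                 # H_2, H_3, ...
    return clf                                      # final H_SVM
```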

Using the localization classifier, we can localize the coupler yoke by exploiting the sliding window search method.24 The sliding window search is the common paradigm employed for object detection and can locate the coupler yoke in our proposed method with high accuracy (see Fig. 8).
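A single-scale sketch of the sliding-window search follows; in the full system, the search would run over the MDF feature pyramid, and the window stride is an assumed value.

```python
import numpy as np

def locate_yoke(image, clf, extract_mdf, win=136, stride=8):
    """Return the top-scoring 136x136 window under the SVM classifier."""
    best_xy, best_score = None, -np.inf
    H, W = image.shape[:2]
    for top in range(0, H - win + 1, stride):
        for left in range(0, W - win + 1, stride):
            f = extract_mdf(image[top:top + win, left:left + win]).reshape(1, -1)
            score = clf.decision_function(f)[0]  # signed distance to hyperplane
            if score > best_score:
                best_xy, best_score = (left, top), score
    return best_xy, best_score
```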

Fig. 8: Localization of coupler yokes: (a) localization of a normal coupler yoke on the left, (b) localization of a normal coupler yoke on the right, (c) localization of a left coupler yoke with the bolt missing, (d) localization of a right coupler yoke with the bolt missing.

After we obtain the image patches of the coupler yokes, the next step is to recognize whether the coupler yoke bolts are missing. In the localization part, we use MDF features and an SVM classifier to separate coupler yokes from the rest of the image. Our MDF features perform well there because the structure of the coupler yoke differs from other parts of the image and MDF represents this structure effectively. However, structure features are not a good way to distinguish coupler yoke images with bolts from those with missing bolts, so the combination of MDF features and an SVM classifier is not a good choice for the recognition task. To solve this problem, we choose Haar features to describe more of the appearance of the coupler yoke bolts and Adaboost decision trees as the classifier for recognition. The recognition process is shown in the recognition part of Fig. 2, inside the green rectangle of the flowchart.

Haar features are an overcomplete set of two-dimensional Haar functions that can be used to encode the local appearance of objects. They were introduced to object detection in Ref. 25 and attracted much attention after being employed for face detection; in Ref. 17, an impressive real-time detection rate was achieved. Haar features come in various templates, differing in the number and orientation of their rectangles. These rectangle features can be computed very rapidly using an intermediate representation of the image known as the integral image: the sum of pixel values inside any image rectangle can be obtained in constant time from the integral image, as illustrated below.
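The trick is standard and easy to state in code: one cumulative-sum pass over the image, after which any rectangle sum (and hence any Haar feature value) costs at most four array lookups.

```python
import numpy as np

def integral_image(I):
    """ii[r, c] = sum of I[:r+1, :c+1]."""
    return np.cumsum(np.cumsum(I.astype(np.float64), axis=0), axis=1)

def rect_sum(ii, r0, c0, r1, c1):
    """Sum of I[r0:r1, c0:c1] in O(1) from the integral image."""
    s = ii[r1 - 1, c1 - 1]
    if r0 > 0: s -= ii[r0 - 1, c1 - 1]
    if c0 > 0: s -= ii[r1 - 1, c0 - 1]
    if r0 > 0 and c0 > 0: s += ii[r0 - 1, c0 - 1]
    return s

# e.g., a two-rectangle Haar feature over a region:
# rect_sum(left half) - rect_sum(right half)
```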

After extracting the features, we apply Adaboost to constrain the number of Haar features used and to combine decision trees as weak classifiers into a strong one. Generally speaking, the features used in a single classifier are not sufficient for accurate pattern recognition, so the Adaboost algorithm builds a strong classifier by combining weak classifiers to improve the recognition rate.26 This requires the Adaboost training process. After a weak classifier is constructed from each Haar feature, the training samples are reweighted to emphasize samples that are incorrectly classified, and the next weak classifier is trained with the reweighted samples. The predictions of the weak classifiers are combined through weighted voting, with weights determined by each weak classifier's classification error, to produce the prediction of the strong classifier.

The final Adaboost decision trees classifier provides a considerably high recognition rate and proves very effective in our project. However, slow training is a common drawback of Adaboost decision trees. We therefore propose a fast decision tree training method that preprunes noneffective features, with which the final Adaboost decision trees classifier can be trained much faster.

Quick Adaboost Decision Trees Training

In the recognition part, we choose Adaboost decision trees as the final fault classifier. Adaboost is a very popular learning technique that combines many weak learners to form a strong one. Decision trees are often used as the weak learners because of their simple structure and good robustness in common applications. Combining Adaboost with decision trees yields a powerful classifier and forms the inner framework of many advanced methods; the combination is widely used in many domains and performs with satisfactory testing speed and accuracy. However, the relatively slow training of Adaboost decision trees remains a bottleneck in many practical applications and also affects the training of the final classifier used in our fault inspection system. We therefore propose a fast algorithm to speed up the training of the Adaboost decision trees classifier.

An Adaboost classifier of the form $H(x) = \sum_t \alpha_t h_t(x)$ can be trained by minimizing a loss function $L$, i.e., by optimizing the scalar $\alpha_t$ and the weak learner $h_t(x)$ in each iteration $t$. Before training, every data sample $x_i$ is assigned a non-negative weight $w_i$. After each iteration, the weights of misclassified samples are increased, which raises the penalty for misclassifying them in subsequent iterations.
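A minimal sketch of this loop for the discrete Adaboost variant (our reading of the description; train_stump is an assumed helper that fits a weak learner to the weighted data):

```python
import numpy as np

def adaboost(X, y, train_stump, T=100):
    """y in {-1, +1}. Returns H(x) = sign(sum_t alpha_t * h_t(x))."""
    w = np.ones(len(y)) / len(y)               # initial sample weights
    learners, alphas = [], []
    for _ in range(T):
        h = train_stump(X, y, w)               # weak learner on weighted data
        pred = h(X)                            # predictions in {-1, +1}
        eps = np.sum(w * (pred != y)) / np.sum(w)
        alpha = 0.5 * np.log((1 - eps) / max(eps, 1e-12))
        w *= np.exp(-alpha * y * pred)         # upweight misclassified samples
        w /= w.sum()
        learners.append(h)
        alphas.append(alpha)
    return lambda Xq: np.sign(sum(a * h(Xq) for a, h in zip(alphas, learners)))
```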

A decision tree $h_{TREE}(x)$ is composed of a stump $h_j(x)$ at every nonleaf node $j$. Decision trees are typically trained with a greedy procedure, recursively setting one stump at a time, starting from the root and expanding to the lower nodes. Each stump produces a binary decision given an input $x \in \mathbb{R}^K$, and is parametrized by a polarity $p \in \{\pm 1\}$, a threshold $\tau \in \mathbb{R}$, and a feature index $k \in \{1, 2, \ldots, K\}$:

$$h_j(x) \equiv p_j\, \mathrm{sign}(x[k_j] - \tau_j), \tag{7}$$

where $x[k]$ is the $k$'th dimension feature of $x$.

The goal at each stage of stump training is to find the optimal parameters that minimize the weighted classification error $\epsilon$:

$$\epsilon \equiv \frac{1}{Z}\sum_i w_i\, \mathbf{1}\{h(x_i) \neq y_i\}, \qquad Z \equiv \sum_i w_i, \tag{8}$$

where $\mathbf{1}\{h(x_i) \neq y_i\}$ is the indicator function.
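For a single feature column, the optimal stump of Eqs. (7) and (8) can be found with one sort and a cumulative-sum scan. A sketch follows, with candidate thresholds placed at the sample values (an assumption):

```python
import numpy as np

def train_stump_1d(x, y, w):
    """Optimal (p, tau, eps) for h(x) = p * sign(x - tau), Eqs. (7)-(8)."""
    order = np.argsort(x)
    xs, ys, ws = x[order], y[order], w[order]
    Z = ws.sum()
    cpos = np.cumsum(ws * (ys == +1))    # positive weight with x <= tau
    cneg = np.cumsum(ws * (ys == -1))    # negative weight with x <= tau
    # For p = +1 (predict -1 below tau, +1 above), the error is the
    # positive weight below tau plus the negative weight above it.
    err_pos = (cpos + (cneg[-1] - cneg)) / Z
    err = np.minimum(err_pos, 1.0 - err_pos)   # p = -1 flips every prediction
    j = int(np.argmin(err))
    p = +1 if err_pos[j] <= 1.0 - err_pos[j] else -1
    return p, xs[j], float(err[j])
```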

Given a feature $k$ and an $m$-subset, the preliminary classification error $\epsilon_m(k)$ is defined as the smallest achievable training error if only the data samples in this $m$-subset are considered. That is, if all samples outside the $m$-subset are trimmed, the preliminary classification error $\epsilon_m(k)$ is the whole classification error:

$$\epsilon_m(k) \equiv \frac{1}{Z_m}\Bigg[\sum_{\substack{i \in m \\ x_i[k] \le \tau_m(k)}} w_i\, \mathbf{1}\{y_i = +p_m(k)\} \;+\; \sum_{\substack{i \in m \\ x_i[k] > \tau_m(k)}} w_i\, \mathbf{1}\{y_i = -p_m(k)\}\Bigg], \tag{9}$$

where $p_m(k)$ and $\tau_m(k)$ are both optimal preliminary parameters.

Using the best feature of an $m$-subset to predict the optimal feature of the entire dataset is reasonable, but a stump trained only on the samples of the $m$-subset may perform badly. So we need an approach that considers as few samples as possible while having no effect on the final training result.

According to the properties of the upper bound of the preliminary error, we propose a new decision tree training method based on comparing feature performance on subsets of the dataset and, consequently, pruning noneffective features (a code sketch follows the steps below):

Fast Decision Trees Training

  1. Use a relatively small m-subset to train each feature and obtain its preliminary classification error εm.
  2. Sort the features by their preliminary errors from best to worst (i.e., from the smallest error to the largest).
  3. Take the feature with the smallest preliminary error and train it on the remaining part of the dataset, i.e., train this feature on the entire dataset; treat its final error as the error upper bound ε.
  4. Following the sorted order, take one feature at a time and compare its error εm with the present error bound ε to determine whether the feature is noneffective:
    • If it is noneffective, prune it immediately.
    • If it is effective, complete its training on the entire dataset. If its final error is not smaller than the present error bound ε, it is not the optimal feature and should be pruned; if its final error is smaller than ε, it is the best-so-far feature, and its final error replaces ε as the new error upper bound.
  5. After all features have been processed, the final best-so-far feature is the optimal feature of the decision tree node, and the final error upper bound is its classification error.
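The steps above can be sketched as follows, reusing train_stump_1d for both the m-subset and the full dataset. Treating a feature as noneffective when its preliminary error already reaches the running bound is our reading of step 4; since the features are sorted by preliminary error and the bound only decreases, the scan can stop at the first such feature.

```python
import numpy as np

def fast_best_feature(X, y, w, m=500):
    """Return (best feature index, its full-dataset error) with prepruning."""
    n, K = X.shape
    sub = np.random.choice(n, size=min(m, n), replace=False)     # the m-subset
    prelim = np.array([train_stump_1d(X[sub, k], y[sub], w[sub])[2]
                       for k in range(K)])                       # step 1
    order = np.argsort(prelim)                                   # step 2
    best_k = order[0]
    bound = train_stump_1d(X[:, best_k], y, w)[2]                # step 3
    for k in order[1:]:                                          # step 4
        if prelim[k] >= bound:        # noneffective: prune (and all that follow)
            break
        err = train_stump_1d(X[:, k], y, w)[2]                   # full training
        if err < bound:
            best_k, bound = k, err    # new best-so-far feature and bound
    return best_k, bound                                         # step 5
```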

Confirmation of Missing Bolts of Coupler Yokes

The missing-bolt fault is recognized by the final missing-bolt classifier. Using the fast Adaboost decision trees training method above, the final classifier can be trained quickly. The training sample database for the Adaboost decision trees classifier consists of two parts: a positive set DP_Adaboost and a negative set DN_Adaboost. The positive set is composed of 88×136-pixel images in which the middle bolt is present (see Fig. 9), whereas the negative set is composed of images of the same size with the middle bolt missing (see Fig. 10). Using the database DAdaboost = DP_Adaboost ∪ DN_Adaboost, we train an Adaboost decision trees classifier HAdaboost. Finally, we extract the bolt region located by the localization part and inspect it with the final inspection classifier HAdaboost.

To summarize, in the whole inspection process, the classifier HSVM is trained and used to localize the coupler yokes, and the classifier HAdaboost is trained and exploited to confirm whether the bolts are missing. The two are cascaded to accurately accomplish the inspection of coupler yoke faults.
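End to end, the cascade can be sketched as below. The bolt-region crop offset and the two feature extractors are assumptions; the paper fixes only the patch sizes (136×136 for localization, 88×136 for the bolt region).

```python
def inspect_image(image, svm_clf, adaboost_clf, extract_mdf, extract_haar):
    """Cascade of H_SVM (localization) and H_Adaboost (recognition)."""
    (left, top), _ = locate_yoke(image, svm_clf, extract_mdf)  # localization part
    bolt = image[top:top + 136, left:left + 88]                # assumed bolt crop
    feat = extract_haar(bolt).reshape(1, -1)
    return "fault: bolt missing" if adaboost_clf(feat)[0] < 0 else "normal"
```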

Experimental Dataset

The images used in the experiments are taken by the TFDS system. The digital cameras used are DALSA HM1400 high-speed CCD cameras with a minimum exposure time of 4.7 μs. Their highest frame rate reaches 64 fps, which makes them suitable for trains with a top speed of 160 km/h. We choose a fixed 6-mm focal length lens to match the cameras. As the equipment is installed outdoors, the natural light varies greatly; to reduce its influence, we mount groups of xenon bulbs beside the cameras as compensation light sources (see Fig. 11).

Fig. 9: Positive sample images in DAdaboost: (a)-(e) are five positive sample images containing the bolts.

Fig. 10: Negative sample images in DAdaboost: (a)-(e) are five negative sample images with the bolts missing.

Fig. 11: Equipment to acquire dynamic coupler yoke images: (a) installed in the center of the railway to acquire the bottom images and (b) installed on the side of the tracks to acquire the profile images.

Experiments on a large number of images were conducted to validate the proposed method. The training set DTrain contains a total of 1436 original images collected from 12 freight trains, which provide 1436 positive and 1400 negative images for training the localization classifier, and 1025 positive and 411 negative images for training the recognition classifier. The test set Dtest is composed of 5124 original images from 26 freight trains, containing 4636 fault-free images and 488 fault images, and is used to test the accuracy of the proposed system. The computer used in our experiments has a 3.6 GHz Intel Xeon E5-1620 processor (four cores, eight threads), 8 GB RAM, and Windows 7.

Inspection Results

The inspection system contains two parts, the localization part and the recognition part, and each part's performance affects the final inspection accuracy. In this section, we analyze the performance of the localization part and of the whole system.

Evaluation of the localization method

The first part of our system locates the position of the coupler yoke, which is essentially an object detection problem. Considering the intrinsic features of the coupler yoke and the varied environments, we proposed the MDF features and a fast way to construct a feature pyramid to detect the target. However, many common detection algorithms could also address this problem. To validate the performance of our algorithm, we compare our localization method with three representative algorithms.

Matching methods using correlation have long been a popular way to detect objects. Various kinds of correlation have been proposed, such as normalized cross-correlation (NCC), phase correlation (PC), and orientation correlation (OC); we choose a recently improved local correlation method27 to compare with ours. Edge information is an important way to describe objects, and edge-based techniques have been developed to improve detection performance; Yang et al.28 proposed a novel edge-based method that can detect objects in cluttered images, and we choose it as a representative comparison. We also compare our method with the widely used local binary pattern (LBP) algorithm.29

We use receiver operating characteristic (ROC) curves to represent the localization accuracy of our proposed method and the three compared ones. In each ROC curve, the vertical axis denotes the localization rate of the classifier and the horizontal axis the number of false localizations. For an ideal classifier, the ROC curve would pass through the top left corner (0, 1), meaning the classifier reaches a 100% localization rate without any false localizations. The experimental results of the four methods are shown as four ROC curves in Fig. 12.

Fig. 12: ROC curves of the four localization methods.

Our classifier reaches the 100% localization rate when the number of false localizations is about 23. This indicates that in the whole test set there are only 23 images in which the localization classifier fails to accurately locate the coupler yoke; the corresponding numbers for the other three methods are 27, 39, and 48. Thus, our MDF features combined with an SVM classifier achieve the best localization performance among the compared methods and also satisfy practical requirements.

Evaluation of the whole inspection system

After testing the localization classifier, we examine the final inspection capability of the entire system. The inspection performance is mainly represented by two indicators, the inspection rate and the false alarm rate, defined as

$$\text{inspection rate} = \frac{\text{number of correct inspections}}{\text{number of fault images}}, \tag{10}$$

$$\text{false alarm rate} = \frac{\text{number of false alarms}}{\text{number of fault-free images}}. \tag{11}$$

Many other fault inspection methods have been proposed; in this section, we compare our inspection system with two recently proposed ones.12,13 MLP is commonly used in classification problems and employs hyperplanes to divide the pattern space into classes,30 so we also include it as a comparison. Table 1 shows the final inspection results of our proposed system and the three others.
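As a quick worked check of Eqs. (10) and (11) against the test set (488 fault images, 4636 fault-free images), the raw counts below are back-calculated from the reported rates rather than taken from the paper:

```python
faults, fault_free = 488, 4636               # test-set composition
correct = round(0.986 * faults)              # ~481 faults correctly inspected
false_alarms = round(0.041 * fault_free)     # ~190 fault-free images flagged
print(correct / faults, false_alarms / fault_free)   # ~0.986 and ~0.041
```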

Table 1: Inspection results of the four inspection methods.

From the table, we can see that over all 5124 images in the test set, our system achieves a 98.6% inspection rate with only a 4.1% false alarm rate, while the inspection rates of the other three methods are all below 96%. As our inspection system is meant to handle running trains online, real-time performance is also important. Our system processes one image in about 98 ms, which is much less than the times of the other systems. The detailed computing time of each module in our system is listed in Table 2.

Table 2: Average processing time of each module in our method.

The experiments above demonstrate that our method has outstanding inspection performance and reliable inspection speed. Our system exceeds the capability of most existing systems and meets the needs of practical operation, so it has already been deployed in many railway stations in China.

In this paper, we propose a vision-based system that automatically inspects faults of running freight trains online. Aimed at recognizing the missing bolt of a coupler yoke, a hierarchical, real-time fault inspection method has been proposed. The approach consists of two main procedures, coupler yoke localization and bolt fault recognition. To localize coupler yokes effectively, we present a feature called MDF and design a fast algorithm to compute multiscale image features. The fault recognition procedure further analyzes the located coupler yoke: we propose a fast decision tree training method to train an Adaboost decision trees classifier that quickly inspects faulty images. Tests on a large number of image samples show that our method reaches a 98.6% inspection rate with only a 4.1% false alarm rate, which means our method guarantees a high inspection rate with few false alarms.

Many current fault inspection systems used in railway transportation still require some manual operation and are in fact semiautomatic.31 The fault inspection system proposed in this paper is completely automatic and has been installed in many railway stations in China, such as Wuhan, Guilin, Chengdu, and Beijing. Although our system performs satisfactorily, it may still be improved by using additional sensor equipment and multidimensional information fusion techniques.

This work was supported by the National Key Scientific Instruments and Equipment Development Program (No. 2012YQ140032), the Natural Science Foundation of Beijing (No. 3142012), and the Instrument Special National Nature Science Foundation of China (No. 61127009).

References

1. Zhu Z. X. et al., "A fast potential fault regions locating method used in inspecting freight cars," J. Comput. 9(5), 1266-1273 (2014).
2. Zhang K. et al., "Algorithm of railway turnout fault detection based on PNN neural network," in Proc. 7th Int. Symp. on Computational Intelligence and Design, pp. 544-547, Hangzhou, China (2014).
3. Rathod V. R. et al., "Comparative analysis of NDE techniques with image processing," J. Nondestr. Eval. 27(4), 1-22 (2012).
4. Zhang H. J. et al., "Fault detection based on multi-scale local binary patterns operator and improved teaching-learning-based optimization algorithm," Symmetry 7(4), 1734-1750 (2015).
5. Widodo A. et al., "Support vector machine in machine condition monitoring and fault diagnosis," Mech. Syst. Signal Process. 21(6), 2560-2574 (2007).
6. Owusu E. et al., "A neural-AdaBoost based facial expression recognition system," Expert Syst. Appl. 41(7), 3383-3390 (2014).
7. Anderson G. B., "Acoustic detection of distressed freight car roller bearings," in Proc. 2007 JRCICE Spring Technical Conf., pp. 91-103, Pueblo (2007).
8. Marino F. et al., "A real-time visual inspection system for railway maintenance: automatic hexagonal-headed bolts detection," IEEE Trans. Syst. Man Cybern. Part C: Appl. Rev. 37(3), 418-428 (2007).
9. De Ruvo P. et al., "A GPU-based vision system for real time detection of fastening elements in railway inspection," in Proc. 16th IEEE Conf. on Image Processing, pp. 2333-2336, Cairo, Egypt (2009).
10. Hart J. M. et al., "Machine vision using multi-spectral imaging for undercarriage inspection of railroad equipment," in Proc. 8th World Congress on Railway Research, pp. 1-8, Seoul, Korea (2008).
11. Kim H. C. et al., "Automated inspection system for rolling stock brake shoes," IEEE Trans. Instrum. Meas. 60(8), 2835-2847 (2011).
12. Zhou F. Q. et al., "Automated visual inspection of angle cocks during train operation," Proc. Inst. Mech. Eng. Part F: J. Rail Rapid Transit 228(7), 794-806 (2014).
13. Li N. et al., "Automatic fault recognition for losing of train bogie center plate bolt," in Proc. IEEE 14th Int. Conf. on Communication Technology, pp. 1001-1005, Chengdu, China (2012).
14. Liu Z. H. et al., "Displacement fault detection of bearing weight saddle in TFDS based on Hough transform and symmetry validation," in Proc. 9th Int. Conf. on FSKD, pp. 1404-1408, Chongqing, China (2012).
15. Zhu Z. X. et al., "Fast and robust 2D-shape extraction using discrete-point sampling and centerline grouping in complex images," IEEE Trans. Image Process. 22(12), 4762-4774 (2013).
16. Jones M. et al., "Fast multi-view face detection," in Proc. IEEE Conf. on Computer Vision and Pattern Recognition, pp. 276-286, Wisconsin (2003).
17. Viola P. et al., "Robust real-time object detection," Int. J. Comput. Vision 57(2), 137-154 (2004).
18. Dalal N. et al., "Histograms of oriented gradients for human detection," in Proc. IEEE Conf. on Computer Vision and Pattern Recognition, pp. 886-893, San Diego (2005).
19. Felzenszwalb P. et al., "Object detection with discriminatively trained part based models," IEEE Trans. Pattern Anal. Mach. Intell. 32(9), 1627-1645 (2010).
20. Krizhevsky A. et al., "ImageNet classification with deep convolutional neural networks," in Proc. Neural Information Processing Systems, pp. 1-9, Nevada (2012).
21. Li X. et al., "A survey of appearance models in visual object tracking," ACM Trans. Intell. Syst. Technol. 4(4), 478-448 (2013).
22. Burt P. et al., "The Laplacian pyramid as a compact image code," IEEE Trans. Commun. 31(4), 532-540 (1983).
23. Benenson R. et al., "Pedestrian detection at 100 frames per second," in Proc. IEEE Conf. on Computer Vision and Pattern Recognition, pp. 2903-2910, Providence (2012).
24. Gualdi G. et al., "Multistage particle windows for fast and accurate object detection," IEEE Trans. Pattern Anal. Mach. Intell. 34(8), 1589-1604 (2012).
25. Papageorgiou C. P. et al., "A general framework for object detection," in Proc. IEEE Conf. on Computer Vision, pp. 555-562, Bombay, India (1998).
26. Freund Y., "Boosting a weak learning algorithm by majority," Inf. Comput. 121(2), 256-285 (1995).
27. Li J. et al., "Improved local correlation method for fingerprint matching," in Proc. Int. Symp. on Computing and Networking, pp. 560-562, Shizuoka, Japan (2014).
28. Yang X. W. et al., "Contour-based object detection as dominant set computation," Pattern Recognit. 45(5), 1927-1936 (2012).
29. Ahonen T. et al., "Face description with local binary patterns: application to face recognition," IEEE Trans. Pattern Anal. Mach. Intell. 28(12), 2037-2041 (2006).
30. Nebti S. et al., "Handwritten characters recognition based on nature-inspired computing and neuro-evolution," Appl. Intell. 38(2), 146-159 (2013).
31. Yella S. et al., "Condition monitoring of wooden railway sleepers," Transport Res. Part C: Emerg. Technol. 17(1), 38-55 (2009).

Chao Zheng received his BS degree in automation from Beijing Technology and Business University, Beijing, China, in 2012. He is currently working toward his PhD in computer science at the School of Instrumentation Science and Opto-electronics Engineering, Beihang University. His research interests include computer vision, fault inspection, visual tracking, object detection, and pattern recognition.

Zhenzhong Wei received his PhD from the School of Automation Science and Electrical Engineering, Beihang University, Beijing, China, in 2003. He is currently a professor in the School of Instrumentation Science and Opto-electronics Engineering, Beihang University. His research interests include machine vision and artificial intelligence.

© The Authors. Published by SPIE under a Creative Commons Attribution 3.0 Unported License. Distribution or reproduction of this work in whole or in part requires full attribution of the original publication, including its DOI.
