This PDF file contains the front matter associated with SPIE Proceedings Volume 11432, including the Title Page, Copyright information, Table of Contents, Author and Conference Committee lists.
Access to the requested content is limited to institutions that have purchased or subscribe to SPIE eBooks.
You are receiving this notice because your organization may not have SPIE eBooks access.*
*Shibboleth/Open Athens users: please sign in to access your institution's subscriptions.
To obtain this item, you may purchase the complete book in print or electronic format on SPIE.org.
Remote Sensing Image Processing and Geographic Information Systems
In hyperspectral image classification, the small number of labeled samples relative to the high dimensionality of the data is one of the major challenges. Semi-supervised learning has shown potential to relieve this dilemma: compared with its supervised counterpart, it exploits the intrinsic structure of both labeled and unlabeled samples. In this work, we propose a graph-fusion-based semi-supervised learning method for hyperspectral image classification. More specifically, two graphs are constructed from spectral-spatial Gabor features and from the original spectral signatures, respectively, and are then integrated using an affine combination. Experimental results on an AVIRIS hyperspectral dataset verify the excellent classification performance of our method.
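The affine graph combination can be sketched as follows. This is a minimal illustration under our own assumptions (Gaussian RBF affinity graphs and a fixed mixing weight alpha); the abstract does not specify the graph construction details.

```python
import numpy as np

def rbf_graph(features, sigma=1.0):
    # Pairwise Gaussian (RBF) affinity graph over sample feature vectors.
    d2 = ((features[:, None, :] - features[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def fuse_graphs(w_spectral, w_gabor, alpha=0.5):
    # Affine combination of the two graphs: alpha*W1 + (1 - alpha)*W2.
    return alpha * w_spectral + (1 - alpha) * w_gabor

# Toy example: 4 samples with spectral and Gabor feature vectors.
spectral = np.array([[0.1, 0.2], [0.1, 0.25], [0.9, 0.8], [0.85, 0.9]])
gabor = np.array([[1.0, 0.0], [0.9, 0.1], [0.0, 1.0], [0.1, 0.9]])
W = fuse_graphs(rbf_graph(spectral), rbf_graph(gabor), alpha=0.6)
```

The fused graph stays symmetric with unit self-affinity, and samples that are close in either feature space remain strongly connected.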
The Shiyang River Basin, located on the northern piedmont of the Qilian Mountains in the east of the Hexi region of Gansu Province, is one of the three major inland river basins of the Hexi region. The basin has a large elevation drop, a complex ecological environment, a complete set of ecosystem types, and varied land cover types with distinct regional differences, and it is one of the areas most sensitive to climate change. Based on Landsat MSS (1980) and Landsat-8 OLI (2015) imagery, field study, vegetation charts and other multi-source data, this research uses 3S technology to compile high-accuracy land cover data for the Shiyang River Basin and to analyze the changes in land cover in the basin from 1980 to 2015. We found the following distribution of land use types in the basin: forest is mainly distributed in the relatively high-altitude Qilian mountain areas; part of the grassland, the natural grassland, is also distributed in the Qilian mountain area, while the other part, the artificial grassland, lies in the middle and lower reaches of the basin, interleaved with cropland. The vertical distribution of the major cover types in the basin shows distinct zonation: from high altitude to low altitude, the sequence is accumulated snow and glacier, forest, grassland, built-up land, cropland, water body, and bare land or sand. These results provide a scientific basis for the study of land use and cover change in a critical region and will inform ecosystem protection, sustainability and management in the Qilian Mountains area.
Sandstorms are a common natural phenomenon formed by special geographical environments and meteorological conditions. Geostationary meteorological satellites, with the advantages of a wide monitoring range and a high observation frequency, have become one of the most effective means of monitoring, tracking and analyzing sandstorm processes. Based on Himawari-8 geostationary meteorological satellite data at 4:00 on May 3, 2017, we compared the remote sensing inversion results of dust intensity obtained from several mature sand-dust intensity inversion models. The results show that the comparable sandstorm intensity index model was the best at inverting the area and range of sand-dust intensity, the sandstorm intensity index model was second, and the mid-infrared channel difference model was the worst. The inversion results of the four types of sand-dust intensity models differ considerably.
In recent years, as environmental issues have grown in frequency and intensity, sandstorms have attracted much more attention in both natural science and social science research. Geostationary satellite imagery can continuously observe the Earth's surface at short intervals, which is a great advantage for monitoring fast-moving targets such as sand-dust. Based on Himawari-8 (H8) geostationary meteorological satellite data at 4:00 on May 3, 2017, the remote sensing retrieval results of sand-dust intensity are compared using a variety of existing sand-dust identification models. The results show that the multi-channel threshold method identifies sand-dust best, the reflected radiation dust index method is second, and the infrared split-window channel difference method and the infrared split-window channel ratio method identify it worst. The single-channel threshold method, infrared multi-channel threshold method, infrared split-window channel difference method and infrared split-window channel ratio method distinguish poorly between cloud and sand-dust, while the multi-channel threshold method and the reflected radiation dust index method distinguish poorly between low-temperature zones and sand-dust.
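The split-window channel difference idea can be sketched as an illustrative brightness-temperature-difference (BTD) test; the threshold here is our assumption, not a value from the compared models. Airborne mineral dust tends to make BT(11 µm) − BT(12 µm) negative, while cloud typically gives a positive difference.

```python
import numpy as np

def dust_mask_split_window(bt_11um, bt_12um, threshold=0.0):
    # Flag pixels whose 11um - 12um brightness temperature difference
    # falls below the threshold as likely sand-dust.
    return (np.asarray(bt_11um) - np.asarray(bt_12um)) < threshold

# Toy scene: a dust pixel (negative BTD) and a cloud pixel (positive BTD).
bt11 = np.array([285.0, 260.0])
bt12 = np.array([287.0, 255.0])
mask = dust_mask_split_window(bt11, bt12)
```

In practice such a test is combined with visible-channel and temperature screens, which is why the single-criterion variants above confuse cloud with dust.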
As fundamental terrain data, the DEM plays an important role in many fields, and high-resolution DEMs are increasingly popular. Yet multiscale DEMs are still desired for some applications, because a low-resolution DEM reduces memory demands and computational complexity. How to obtain multiscale DEMs thus remains an open question: the lower-resolution DEMs should discard detailed information while maintaining the main information of the high-resolution DEM, and at the same time the set of multiscale DEMs should not cost much memory. In general, these goals conflict. This paper therefore proposes a multiscale DEM generation method based on Singular Value Decomposition (SVD), which can establish multiscale DEMs that preserve different levels of detail with only a small increase in memory. The method is simple but effective, and extensive experiments demonstrate its effectiveness.
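The core SVD idea can be sketched as below; this is our illustration, since the abstract does not give the exact truncation scheme. Truncating the decomposition at increasing ranks yields coarse-to-fine approximations of the DEM, and only the leading singular vectors need to be stored, which is where the small memory overhead comes from.

```python
import numpy as np

def multiscale_dem(dem, ranks):
    # Rank-r truncated SVD reconstructions serve as coarse-to-fine DEMs.
    u, s, vt = np.linalg.svd(dem, full_matrices=False)
    return {r: (u[:, :r] * s[:r]) @ vt[:r, :] for r in ranks}

rng = np.random.default_rng(0)
dem = rng.random((32, 32)).cumsum(axis=0)  # smooth-ish synthetic terrain
scales = multiscale_dem(dem, ranks=(2, 8, 32))
```

Higher ranks reproduce the original surface more faithfully; the full-rank reconstruction is exact up to floating-point error.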
Ship target detection in remote sensing satellite images is an important means of obtaining all ships on the sea surface from satellite imagery. It enables the monitoring of sea surface resources and therefore has important civil and military significance. Because of the complex background, ship detection in harbours is one of the main difficulties. In recent years, many deep learning based target detection methods have been proposed and have achieved good results on natural scene images. YOLOv3 is an advanced end-to-end method with both high detection accuracy and fast detection speed. But even advanced methods have shortcomings in this task: ships in port usually dock side by side, which causes many targets to be missed when NMS (Non-Maximum Suppression) is performed on the predicted bounding boxes. In this paper, we replace the original NMS with Soft-NMS on top of YOLOv3, which makes the detector miss fewer targets. At the same time, we add an IoU loss when computing the loss between the prediction box and the ground truth box; the IoU loss takes the IoU between a prediction box and its corresponding ground truth box as the evaluation criterion, which makes the boxes generated by the detector fit the targets more closely. To validate the effectiveness of the proposed algorithm, we use harbour remote sensing data collected from Google imagery and the GaoFen-2 (GF-2) satellite; the experimental results show the good performance of the proposed method for detecting ship targets in harbour.
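Soft-NMS can be sketched as follows; this is a minimal Gaussian-decay variant (the decay function and sigma are illustrative choices, not values from the paper). Instead of discarding boxes that overlap the current top detection, their scores are merely decayed, so ships docked side by side are less likely to be suppressed.

```python
import numpy as np

def iou(box, boxes):
    # IoU of one (x1, y1, x2, y2) box against an (N, 4) array of boxes.
    x1 = np.maximum(box[0], boxes[:, 0]); y1 = np.maximum(box[1], boxes[:, 1])
    x2 = np.minimum(box[2], boxes[:, 2]); y2 = np.minimum(box[3], boxes[:, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    a = (box[2] - box[0]) * (box[3] - box[1])
    b = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
    return inter / (a + b - inter + 1e-9)

def soft_nms(boxes, scores, sigma=0.5, score_thresh=0.001):
    # Gaussian Soft-NMS: decay overlapping scores instead of removing boxes.
    boxes = np.asarray(boxes, float)
    scores = np.asarray(scores, float).copy()
    idx = np.arange(len(scores))
    keep = []
    while idx.size:
        top = idx[np.argmax(scores[idx])]
        keep.append(int(top))
        idx = idx[idx != top]
        if idx.size:
            overlap = iou(boxes[top], boxes[idx])
            scores[idx] *= np.exp(-(overlap ** 2) / sigma)  # soft decay
            idx = idx[scores[idx] > score_thresh]
    return keep, scores

# Two ships docked side by side: hard NMS at an IoU threshold of 0.4
# would drop the second box; Soft-NMS keeps it with a decayed score.
boxes = [[0, 0, 10, 10], [4, 0, 14, 10]]
keep, scores = soft_nms(boxes, [0.9, 0.8])
```

Both detections survive, and only the overlapping one has its confidence reduced.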
The contrast and color fidelity of aerial images are usually seriously weakened, and many features are obscured, because of atmospheric scattering and other factors. Using global features, low-contrast images can be improved globally, and the enhanced images show little noise and few ringing artifacts, but some parts may become overexposed or underexposed. Using local features, details appear better, but noise and ringing artifacts can arise when the contrast gain is too large. In this paper, a new contrast enhancement method with adaptive gamma correction, based on a weighting of the global and local gray-scale means, is proposed. The adaptive gamma parameter, obtained by incorporating the global and local gray-scale means into the weighting distribution, is used to correct the gray value of each pixel in the image. Aerial images taken by a DJI Inspire 2 unmanned aerial vehicle at an altitude of 500 meters were processed with the proposed method. Experimental results indicate that the proposed algorithm performs even better than current mainstream methods in contrast enhancement for low-visibility aerial images.
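The weighting idea can be sketched as below. This is a hedged illustration, not the paper's exact formula: the global mean is blended with a local box-filter mean, and a per-pixel gamma is chosen so that the blended mean maps to mid-gray. The window size k, the weight w, and the mid-gray target are all our assumptions.

```python
import numpy as np

def box_mean(img, k=15):
    # Local mean via an integral image (edge-padded box filter).
    pad = k // 2
    p = np.pad(img.astype(float), pad, mode='edge')
    ii = np.pad(p.cumsum(0).cumsum(1), ((1, 0), (1, 0)))
    h, w = img.shape
    return (ii[k:k + h, k:k + w] - ii[:h, k:k + w]
            - ii[k:k + h, :w] + ii[:h, :w]) / (k * k)

def adaptive_gamma(img, k=15, w=0.5):
    # img in [0, 1]; blend global and local gray-scale means, then choose
    # a per-pixel gamma that maps the blended mean to mid-gray (0.5).
    m = np.clip(w * img.mean() + (1 - w) * box_mean(img, k), 1e-3, 1 - 1e-3)
    gamma = np.log(0.5) / np.log(m)
    return np.clip(img, 1e-6, 1.0) ** gamma

dark = np.full((32, 32), 0.1)      # uniformly underexposed patch
enhanced = adaptive_gamma(dark)
```

For a uniformly dark patch the blended mean equals the pixel value, so the correction lifts it exactly to mid-gray; in real images the local term varies per pixel and brightens or darkens regions selectively.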
Remote sensing image classification has important research significance and application value in image information extraction and in ground object detection and identification, and it is widely used in military reconnaissance, disaster relief, crop recognition, yield estimation and other military and civil fields. In the past few decades, scholars have done a great deal of research on remote sensing image classification and have put forward many classification methods, mainly divided into supervised and unsupervised classification. However, as remote sensing image resolution increases, traditional classification algorithms cannot meet the needs of high-precision classification, nor can they solve the problems of "different objects with the same spectrum" and "the same object with different spectra". In recent years, machine learning has made breakthroughs in image classification research. As a branch of machine learning, deep learning stands out among many machine learning algorithms for the applicability of its learning models and the accuracy of its classification results, so more and more scholars apply deep learning to remote sensing image classification. In this paper, the application of deep learning to remote sensing image classification is analyzed and its prospects are discussed. Firstly, the basic classification process is summarized and the common data sets are introduced. Secondly, frequently used models and open-source tools are introduced, together with an analysis of the latest application progress of rapidly developing deep learning methods. Finally, the remaining difficulties and challenges are discussed and future trends are considered.
Regional crop production prediction is a significant component of national food security assessment. Crop growth models are successfully applicable to yield estimation at the point scale; however, they are hampered by the difficulty of deriving regional values for key crop input parameters. The World Food Studies (WOFOST) model was used to express the characteristics of the time-series LAI over the crop growth season in the study area. To correct the systematic errors in LAI extracted from coarse-resolution data due to the mixed-pixel effect, a corrected LAI was produced by combining field-measured LAI data with the temporal trend information of the HJ-derived LAI. The time-series LAI was then assimilated by combining the corrected HJ-LAI and the WOFOST-simulated LAI over the whole growth stage with the ensemble Kalman filter (EnKF) algorithm, and the assimilated optimal LAI was used to drive the WOFOST model per pixel to estimate regional yield. By scheduling the assimilation of observations at different step lengths and comparing the accuracy and efficiency of the assimilation at different time scales, we selected the proper assimilation time scale; the results indicate that a step length between 10 and 16 days is most appropriate. Compared with the statistical yield, the coefficient of determination was 0.66 and the RMSE was 1.61 ton/hm². The results show that assimilating remotely sensed data into a crop growth model with the EnKF provides a reliable approach for estimating regional crop yield and has great potential in agricultural applications.
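The EnKF analysis step for LAI can be sketched as follows. This is a minimal perturbed-observation variant for a scalar LAI state, under our own simplifying assumptions; the actual study assimilates LAI into the full WOFOST state per pixel.

```python
import numpy as np

def enkf_update(lai_ensemble, lai_obs, obs_std, rng):
    # Perturbed-observation EnKF analysis for a scalar LAI state.
    x = np.asarray(lai_ensemble, float)
    p = x.var(ddof=1)                               # forecast error variance
    k = p / (p + obs_std ** 2)                      # Kalman gain
    y = lai_obs + rng.normal(0.0, obs_std, x.size)  # perturbed observations
    return x + k * (y - x)                          # analysis ensemble

rng = np.random.default_rng(42)
prior = np.array([1.8, 2.0, 2.2, 2.4, 2.6])   # modelled LAI ensemble members
posterior = enkf_update(prior, lai_obs=3.2, obs_std=0.1, rng=rng)
```

The analysis ensemble shifts toward the remotely sensed LAI and its spread shrinks, reflecting the information gained from the observation.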
Pet healthcare data is often stored in a scattered manner owing to changes of pet owner, service agency or other reasons, which results in a large number of repeated examinations during healthcare service processes and even medical negligence. We therefore propose a blockchain-based pet healthcare data sharing approach, so that a pet's healthcare history can be presented completely when needed, the data is protected from concentrated attacks, and data integrity and accountability in healthcare service processes are ensured. Ciphertext-policy attribute-based encryption (CP-ABE) is used to safeguard the privacy of the pet owner and the confidentiality of the pets' healthcare data stored with different healthcare service parties. Smart contracts are applied to govern data interaction between users and the proper acquisition of data after a change of pet owner. In addition, we provide a security analysis and a performance evaluation.
Attention can be interpreted as a mechanism that allocates available computing power to the most informative part of a signal; in deep learning, attention mechanisms likewise help to dig out subtle information. In hyperspectral classification, discriminating some land cover types depends on fine spectral differences, but most classification methods do not focus on the fine differences between hyperspectral categories. In this paper, a hierarchical group attention classification method is proposed that focuses on category differences from coarse to fine, so that the fine differences between categories can be captured and more accurate classification achieved. For comparison and validation, we test the proposed approach against three other classification approaches on the Salinas and Indian Pines datasets, and the experiments demonstrate that our approach distinguishes the subtle spectral differences of similar categories more accurately.
Deep learning (DL) based classification methods have been used successfully for hyperspectral image classification in recent years. Among the various DL-based methods, the convolutional neural network (CNN) has attracted much attention. However, the limited number of samples restricts the widespread application of DL-based methods. To deal with this problem, we propose a classification framework that can be transfer-learned between hyperspectral data with different numbers of bands. First, band selection is conducted so that imagery from different hyperspectral sensors retains the same number of bands. Second, we simplify the typical 1D-CNN architecture by removing the max-pooling layers. Third, the modified CNN is trained on source data, and this pretrained CNN is then fine-tuned on the target data. In the experiment, we pretrain the proposed network on the Indian Pines scene and then fine-tune its parameters to classify pixels in the Botswana scene. According to the classification results, the proposed method obtains the highest overall accuracy compared to KNN, SVM and the corresponding original 1D-CNN model, while also spending the least time on training. It can therefore be concluded that transfer learning can be applied between different hyperspectral images and helps improve classification efficiency.
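The first step, retaining the same number of bands across sensors, can be sketched as a simple uniform band subsampling. This is our illustrative stand-in; the abstract does not state which band selection criterion the paper actually uses, and the cube shapes below are the commonly distributed versions of the two scenes.

```python
import numpy as np

def match_bands(cube, n_bands):
    # Uniformly subsample spectral bands so that source and target
    # sensors feed the same 1D-CNN input size.
    idx = np.linspace(0, cube.shape[-1] - 1, n_bands).round().astype(int)
    return cube[..., idx]

indian_pines = np.zeros((145, 145, 200))   # e.g. 200-band source cube
botswana = np.zeros((1476, 256, 145))      # e.g. 145-band target cube
src = match_bands(indian_pines, 100)
tgt = match_bands(botswana, 100)
```

After this step both cubes present 100-band spectra, so one 1D-CNN can be pretrained on the source pixels and fine-tuned on the target pixels without changing its input layer.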
This paper proposes a hyperspectral image dimensionality reduction method based on automatic subspace partitioning, mutual-information-based K-means clustering, and adaptive band selection. First, the automatic subspace division method is used to determine the initial subspaces. Within each initial subspace, the image variance and the mutual information between bands are used with K-means to determine the cluster centers, and the subspace boundary is drawn at the pair of adjacent bands whose difference in mutual information with the cluster centers is smallest in absolute value. Then, within each divided subspace, the adaptive band selection index is computed for every band; the bands are ordered by index from largest to smallest, and the first three bands of each subspace are selected. Experiments on OMIS hyperspectral data show that this method achieves higher classification accuracy than previous band selection methods.
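The band-to-band mutual information used for the clustering can be estimated from a joint histogram; a minimal sketch follows (the bin count is our choice).

```python
import numpy as np

def band_mutual_information(band_a, band_b, bins=32):
    # Histogram estimate of the mutual information between two band images.
    joint, _, _ = np.histogram2d(band_a.ravel(), band_b.ravel(), bins=bins)
    p = joint / joint.sum()
    px = p.sum(axis=1, keepdims=True)   # marginal of band_a
    py = p.sum(axis=0, keepdims=True)   # marginal of band_b
    nz = p > 0
    return float((p[nz] * np.log(p[nz] / (px @ py)[nz])).sum())

rng = np.random.default_rng(0)
base = rng.random((64, 64))
correlated = base + 0.05 * rng.random((64, 64))  # spectrally similar band
noise = rng.random((64, 64))                     # unrelated band
```

Adjacent bands of the same material share high mutual information, while bands from different spectral regions do not, which is what makes MI a usable subspace-boundary signal.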
Whether a surface rupture exists along the southern segment of the Minjiang fault has remained controversial in recent years. In previous work, interpretation of Google remote sensing images suggested that this fault section is exposed on slopes of the eastern bank of the Minjiang River as surface ruptures, implying activity during the Holocene, while the features of the fault scarps seen in the field challenge the existence of these ruptures. Through exhaustive field investigations, this paper attempts to further address this issue. Our geological and geomorphological analysis suggests that the topographic features seen in the remote sensing data are not traces of surface ruptures but instead resulted from a large landslide at the river. This reminds us that there may be great uncertainty in using remote sensing image interpretation to infer surface ruptures associated with faults.
[Objective] Based on the PIE SDK, with Landsat-8 as the data source, a plug-in corn planting area extraction tool was implemented to provide technical support for the rapid and objective acquisition of county-level corn planting area information, and to assist agricultural remote sensing applications and agricultural development. [Methods] The PIE SDK was used for plug-in secondary development to realize radiometric calibration, fusion and cropping of remote sensing images. The vegetation distribution of the experimental area was obtained with the normalized difference vegetation index (NDVI), and the K-means classification method was used to extract the corn planting area. [Results] With Weishi County as the experimental area and Landsat-8/OLI data of September 4, 2014, the extracted corn planting area was 29,800 hectares, accounting for 23.5% of the total area of the experimental area and mainly distributed in its central and eastern parts. (1) The corn planting area extracted by the developed plug-in was 29,800 hectares, while the statistical corn planting area of Weishi County in 2014 was about 27,800 hectares; the difference between the two, relative to the experimental area, was 2.25%. This accuracy makes the plug-in an effective tool for surveying corn planting area in Weishi County. (2) The distribution map of corn plantations in Weishi County, obtained by processing the classification results, is basically consistent with the county's corn planting distribution over the years. Extracting the corn planting area of Weishi County with the NDVI and K-means method is therefore feasible, and the experiments show that setting the maximum number of iterations to 30 works well, which can serve as a reference for parameter settings when extracting county-level corn planting area information.
[Conclusions] Secondary development based on the PIE SDK achieves county-level corn planting area extraction. The results basically meet statistical data quality requirements, provide a reference for developing similar crop extraction plug-ins based on the PIE SDK, and can supply objective data to support fast and accurate corn cultivation statistics, subsidy and insurance business.
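The NDVI computation and a minimal 1-D K-means over NDVI values can be sketched as follows. The cluster count and the toy reflectances are illustrative; the paper uses the PIE SDK's own K-means classifier, and only the 30-iteration cap is taken from the abstract.

```python
import numpy as np

def ndvi(nir, red):
    # Normalized difference vegetation index.
    return (nir - red) / (nir + red + 1e-9)

def kmeans_1d(values, k=2, max_iters=30, rng=None):
    # Minimal K-means on NDVI values (max_iters=30 per the abstract).
    rng = np.random.default_rng(0) if rng is None else rng
    centers = rng.choice(values, size=k, replace=False)
    for _ in range(max_iters):
        labels = np.argmin(np.abs(values[:, None] - centers[None, :]), axis=1)
        centers = np.array([values[labels == j].mean() if (labels == j).any()
                            else centers[j] for j in range(k)])
    return labels, centers

# Synthetic pixels: dense vegetation (corn-like) vs bare soil.
nir = np.array([0.5, 0.55, 0.6, 0.30, 0.28, 0.31])
red = np.array([0.1, 0.08, 0.1, 0.25, 0.26, 0.24])
labels, centers = kmeans_1d(ndvi(nir, red), k=2)
```

High-NDVI pixels cluster together and can then be thresholded or labeled as the corn class.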
Sea-land segmentation is one of the important research areas in remote sensing image processing, and edge-aware sea-land segmentation is a hot topic: edge information is used as an auxiliary learning task that provides additional information for the segmentation. In this paper, we propose a novel model for sea-land segmentation with edge detection in the lower layers and segmentation in the higher layers, which proves to be an effective way to fuse the two tasks. We use a pre-trained VGG16 model to initialize the backbone and the F-score to assess the segmentation output. On our own test dataset, land accuracy reaches an F-score of 0.9929 and sea accuracy an F-score of 0.9937, the highest among the five methods included in our comparison.
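The F-score used for evaluation, computed over binary land/sea masks, can be sketched as:

```python
import numpy as np

def f_score(pred, gt, beta=1.0):
    # F-score of a predicted binary mask against the ground truth mask.
    pred, gt = np.asarray(pred, bool), np.asarray(gt, bool)
    tp = np.logical_and(pred, gt).sum()
    precision = tp / max(pred.sum(), 1)
    recall = tp / max(gt.sum(), 1)
    b2 = beta ** 2
    return (1 + b2) * precision * recall / max(b2 * precision + recall, 1e-9)

gt = np.array([[1, 1, 0, 0], [1, 1, 0, 0]])      # land vs sea ground truth
pred = np.array([[1, 1, 0, 0], [1, 0, 0, 0]])    # one land pixel missed
score = f_score(pred, gt)
```

With beta = 1 this is the harmonic mean of precision and recall; the land and sea scores in the abstract are this quantity computed with each class in turn treated as positive.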
This paper describes a deep learning approach to semantic segmentation of very high resolution remote sensing images. We introduce RLFCN, a fully convolutional architecture based on residual logic blocks, to model the ambiguous mapping between remote sensing images and classification maps. To recover the output at the original resolution, we adopt a special scheme that efficiently learns feature map up-sampling within the network. For optimization, we employ the equally-weighted focal loss, which is particularly suitable for this task because it reduces the impact of class imbalance. Our framework consists of a single architecture trained end-to-end; it does not rely on any post-processing techniques and needs no extra data beyond the images. We conducted experiments on the ISPRS Vaihingen dataset. The results indicate that our framework achieves better performance than the current state of the art while containing fewer parameters and requiring less training data.
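The equally-weighted focal loss (i.e. without a per-class alpha term) can be sketched as:

```python
import numpy as np

def focal_loss(probs, labels, gamma=2.0):
    # Equally-weighted focal loss: mean of (1 - p_t)^gamma * -log(p_t),
    # where p_t is the predicted probability of the true class.
    pt = np.clip(probs[np.arange(len(labels)), labels], 1e-9, 1.0)
    return float(np.mean((1.0 - pt) ** gamma * -np.log(pt)))

# Confident correct predictions are down-weighted relative to hard ones,
# which is what counters class imbalance.
easy = np.array([[0.95, 0.05], [0.9, 0.1]])
hard = np.array([[0.55, 0.45], [0.6, 0.4]])
labels = np.array([0, 0])
```

The (1 − p_t)^gamma factor shrinks the contribution of pixels the network already classifies well, so the abundant easy background pixels no longer dominate the gradient.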
Object detection is a fundamental yet challenging problem in both natural and aerial scenes. Although region-based deep convolutional neural networks (CNNs) have brought impressive improvements to object detection in natural scenes, detecting oriented objects in aerial images remains challenging due to the complexity of aerial image backgrounds and the large degrees of freedom in scale, orientation and density. To tackle these problems, we propose a novel network, composed of a backbone with a global attention module, a multi-scale object proposal network and a final oriented object detector, which can efficiently detect small objects, arbitrarily oriented objects and dense objects in aerial images. We use pyramid pooling blocks as a global attention module on top of the backbone to generate discriminative feature representations that provide diverse context information and complementary receptive fields for the detector; this module helps the model reduce false alarms and incorrect classifications against complex aerial image backgrounds. The multi-scale object proposal network generates object-like regions at different scales through several intermediate layers; these regions are then sent to the detector for refined classification and regression, which alleviates the problem of varying scales in aerial images. The oriented object detector generates predictions as inclined boxes. Quantitative comparisons on the challenging DOTA dataset show that our method is more accurate than baseline algorithms, significantly improves performance, and is effective for object detection in aerial images.
With the development of remote sensing technology, we can obtain more and more target information from remote sensing images. Among this information, the 6D pose comprises the position and attitude of the target relative to the camera in a three-dimensional coordinate system. Traditional algorithms predict a target's 6D pose from a predicted RoI or inclined box. However, the traditional IoU detection criterion cannot reflect the direction of the target, and the inclination of an inclined box is ambiguous, for instance between 0° and 180°, or 0° and 360°. In this paper, we present a new algorithm for predicting a target's 6D pose in remote sensing images: Anchor Points Prediction (APP). Unlike previous methods, the final output preserves direction information: a neural network predicts multiple feature points of the target, from which we obtain the homography between the object plane and the ground. The resulting 6D pose accurately describes the three-dimensional position and attitude of the target. We tested our algorithm on the HRSC2016 and DOTA datasets, achieving accuracy rates of 0.863 and 0.701, respectively. The experimental results show that the accuracy of target detection with the APP algorithm is significantly improved. At the same time, the algorithm performs one-stage prediction, which makes the computation simpler and more efficient.
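The homography from predicted anchor points can be recovered with the standard Direct Linear Transform; a minimal sketch follows (four or more point correspondences assumed, and the planar transform below is a synthetic example, not a pose from the paper).

```python
import numpy as np

def homography_dlt(src, dst):
    # Direct Linear Transform: H maps src (N, 2) onto dst (N, 2), N >= 4.
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, vt = np.linalg.svd(np.array(rows, float))
    h = vt[-1].reshape(3, 3)          # null vector = flattened H, up to scale
    return h / h[2, 2]

# Recover a known planar transform from four anchor points.
src = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], float)
true_h = np.array([[1.2, 0.1, 2.0], [0.0, 0.9, 3.0], [0.0, 0.0, 1.0]])
ones = np.hstack([src, np.ones((4, 1))])
proj = (true_h @ ones.T).T
dst = proj[:, :2] / proj[:, 2:3]
H = homography_dlt(src, dst)
```

Because a homography encodes orientation without the 180° box ambiguity, pose decomposed from it keeps the target's direction information.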
We propose a fast and efficient method for pedestrian video segmentation. Previous methods can use only the first frame, the previous frame, or a combination of the two; in our framework, all past frames can be exploited through a memory network. The past frames with their corresponding masks form the memory, and the current frame, as the target, is segmented using information from the memory rather than from the current frame alone. This design better handles motion and appearance changes in the video. ResUnet is used as the segmentation network to improve time efficiency. Since no dataset is publicly available yet for pedestrian video segmentation, we have internally labeled a large dataset containing 216 training sequences and 24 test sequences, which will be made public in the future. We validate our method on the test set and achieve a mean IoU of 92.6, better than previous methods, while running in real time (90 FPS for 160×96 input on a TITAN V).
This paper proposes an object-based loss function for segmentation neural networks. Traditional segmentation neural networks (SNNs) are trained with pixel-based back-propagation (PBP). Since objects of different sizes occupy different pixel ratios in an image, small objects carry little weight in the segmentation loss, so PBP can greatly degrade detection accuracy when many small objects are present. Considering this defect of PBP, we propose an object-based back-propagation (OBP) loss weighting scheme: the back-propagation weights of different objects are unequal and inversely proportional to the area each object occupies. The proposed OBP loss is validated in segmentation network dataset tests.
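The weighting rule stated above — each object weighted inversely to its area — can be sketched as a per-pixel weight map built from an instance mask. This is a minimal illustration of the rule as described, not the paper's training code, and the background-label convention (0) is an assumption:

```python
import numpy as np

def object_weight_map(instance_mask):
    """Build a per-pixel weight map in which every pixel of an object gets a
    weight inversely proportional to that object's area, so each object
    contributes equally to back-propagation. Label 0 = background (assumed)."""
    weights = np.zeros(instance_mask.shape, dtype=float)
    labels, counts = np.unique(instance_mask, return_counts=True)
    for label, area in zip(labels, counts):
        if label == 0:
            continue
        weights[instance_mask == label] = 1.0 / area
    return weights

mask = np.array([[1, 1, 1, 1],
                 [1, 1, 1, 2]])
w = object_weight_map(mask)
print(w[0, 0], w[1, 3])  # 1/7 for the large object, 1.0 for the single-pixel one
```

Multiplying the per-pixel loss by such a map makes every object's total contribution equal to 1 regardless of its size.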
Optical fiber communication, one of the main modes of modern communication, has developed rapidly and is widely used. Optical cable faults are the primary cause of communication interruptions; statistics show that about two-thirds of faults are optical cable faults. To keep communication uninterrupted and repair faulty optical cable lines in time, faults must be located accurately and the faulty cable repaired quickly. By converting the fiber distance read from the Optical Time Domain Reflectometer (OTDR) test curve into ground distance using the original as-built data, the fault point can be found quickly, greatly shortening the search time for hard-to-locate faults so that they can be eliminated.
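The abstract does not give the conversion procedure in detail; one common form is to walk back to the nearest known splice and discount the remaining fiber length by a cable-reserve ratio. The sketch below is a hedged illustration of that idea — the function, the splice-record format, and the 5% reserve ratio are all assumptions, not figures from the paper:

```python
def otdr_ground_length(fiber_distance_m, splice_records, reserve_ratio=0.05):
    """Convert an OTDR fiber distance to an approximate ground distance.
    fiber_distance_m: distance to the fault along the fiber reported by the OTDR.
    splice_records: (fiber_m, ground_m) pairs for known joint boxes, sorted by
                    fiber distance (from the as-built records).
    reserve_ratio: assumed extra cable length (slack, twist) per metre of route."""
    # start from the nearest known splice before the fault
    base_fiber, base_ground = 0.0, 0.0
    for fiber_m, ground_m in splice_records:
        if fiber_m <= fiber_distance_m:
            base_fiber, base_ground = fiber_m, ground_m
    remaining_fiber = fiber_distance_m - base_fiber
    # shrink the remaining fiber length by the cable-reserve ratio
    return base_ground + remaining_fiber / (1.0 + reserve_ratio)

print(otdr_ground_length(2150.0, [(1050.0, 1000.0), (2100.0, 2000.0)]))
```

Anchoring on the last splice keeps the accumulated reserve error small, which is why the search range around the computed point stays short.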
With the rapid development of urban rail transit signaling technology, the requirements placed on computer interlocking systems are ever higher. The DS6-K5B computer interlocking system is a new system jointly developed by the China Railway Communication Signal Research and Design Institute and Keizo Corporation of Japan. The system adopts a two-by-two-take-two structure, giving it high reliability and safety. Nevertheless, failures of the DS6-K5B system are inevitable. Based on the working principle of the DS6-K5B computer interlocking system and years of practical experience, this paper presents an effective troubleshooting method for it. Practice has proved that this method can quickly eliminate faults and restore the system to its normal working state.
Generative Adversarial Networks (GANs) are among the most promising generative models of recent years. In this paper, we propose a model called terrain-maker GAN (TMGAN). It differs from the original GANs in three respects: first, given a topographic map, TMGAN can generate the corresponding satellite aerial map, and vice versa; second, TMGAN can model terrain adaptively; third, TMGAN can predict the height map of the surface environment. We collected two datasets of paired and unpaired topographic maps and satellite aerial maps to train our model and to test the influence of hidden variables. In this paper, we demonstrate the three-dimensional modeling ability of TMGAN.
With the rapid development of urban rail transit signaling technology, signaling systems place ever higher requirements on basic signaling equipment, which mainly includes signal machines, track circuits, and turnouts. The switch machine is an important piece of basic signaling equipment used to reliably convert the turnout, change its direction, lock the switch rails, and indicate the turnout position. It ensures traffic safety, improves transport efficiency, and reduces the labor intensity of traffic personnel. For speed-up turnouts, the S700K electric switch machine performs turnout conversion very efficiently. When the turnout is trailed or left in the "four-open" position, the point rails on both sides are not closely fitted; in this state a train can easily derail or roll over, directly endangering traffic safety. Maintenance of the switch machine is therefore particularly important, and mainly comprises routine maintenance and troubleshooting. This paper puts forward a new method for analyzing and handling electrical faults of the S700K electric switch machine. By applying the proposed method, an electrical fault of the S700K switch machine can be eliminated in the shortest time and the turnout restored to its normal working state. The method is also applicable to the analysis and handling of electrical faults of ZD6, ZYJ7, ZDJ9, and other types of switch machines.
With the rapid development of urban rail transit, the demands on subway communication systems and their service quality are ever higher. A good communication system is an important factor in ensuring the safe operation of urban rail transit. Telephone communication is the basic means of train command in urban rail transit, comprising public telephones and private telephones. At present, public and private telephone systems in urban rail transit use stored-program-control (SPC) exchange technology, yet domestic key equipment suppliers such as ZTE and Huawei have announced that they are phasing out production of SPC exchange equipment. A new approach is therefore needed. Soft-switch technology can support various forms of information; it is a new network technology that integrates voice, data, image, and fax services. As a representative of scientific and technological innovation, soft-switch technology can be effectively integrated with urban rail transit communication systems, and the quality of communication service can thereby be improved. Taking Wuhan Metro as an example, this paper discusses soft-switch technology and its application in urban rail transit communication systems. After a successful pilot, it can be extended to other cities.
Switch conversion equipment has the highest failure rate in rail transit signaling systems. Switch equipment maintenance and fault handling is a core course in the rail transit signaling specialty and a key skill for ensuring the safety of rail transit operations. With the rapid growth of rail transit mileage, train speed, and traffic density, the increased frequency of switch equipment use leads to more daily maintenance and fault handling work, which puts great pressure on safe production. The ability to quickly diagnose and handle switch faults is therefore an important guarantee of rail transit safety. For these reasons, it is particularly important to strengthen the operational skills training of students and on-site signaling maintenance personnel and to improve their ability to handle equipment failures in a short time. In this paper, an intelligent assessment system based on the S700K switch machine is designed and implemented, which can effectively improve trainees' switch maintenance and fault handling skills through hands-on training and operation.
The permeability coefficient (PCOEF) is an important indicator of the performance of asphalt concrete pavement. The dynamic PCOEF is more informative about the permeability of asphalt concrete than the traditional average PCOEF. A method for dynamically testing the permeability coefficient of asphalt concrete is proposed. In this method, a camera is placed at a suitable position to continuously image the water level. The images recording the water-level changes are saved and processed at runtime. Then, by mapping the water-level positions to capacities and recording the corresponding times, the real-time PCOEF can be calculated at each point in time. The experimental results show that for a specimen with a PCOEF of 531 mL/min by the traditional method, the dynamic PCOEF varies between 443 and 630 mL/min. The results also demonstrate that the proposed method is useful for studying the dynamic performance of asphalt concrete under rainy conditions.
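Once the imaged water levels are mapped to capacities with timestamps, the point-in-time coefficient is just the flow rate over each interval. A minimal sketch of that last step, with assumed sample data (the 10 s interval and the readings are illustrative, not the paper's measurements):

```python
def dynamic_pcoef(samples):
    """Compute the point-in-time permeability coefficient (mL/min) from a list
    of (time_s, volume_ml) readings obtained by mapping imaged water levels to
    container capacities. Returns one flow-rate value per interval."""
    rates = []
    for (t0, v0), (t1, v1) in zip(samples, samples[1:]):
        rates.append((v1 - v0) / (t1 - t0) * 60.0)  # mL/s -> mL/min
    return rates

# hypothetical levels sampled every 10 s; the rate varies over time,
# which is exactly what the average coefficient cannot show
readings = [(0, 0.0), (10, 95.0), (20, 180.0), (30, 255.0)]
print(dynamic_pcoef(readings))  # [570.0, 510.0, 450.0]
```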
In glass thickness measurement, traditional contact-based manual measurement tends to damage the measured surface and suffers from slow speed and low accuracy. Exploiting the optical behavior of a laser passing through glass surfaces, this paper proposes a novel glass thickness measurement method based on laser triangulation. A double line image, formed by the line laser reflected from the upper and lower surfaces of the glass, is captured on the CCD through the camera lens. By analyzing the texture features of the laser lines in the images, a gray-centroid algorithm extracts the two-dimensional coordinates of the laser lines, and three-dimensional point cloud data are derived from the laser triangulation formula. Spatial interpolation over a selected part of the point cloud then yields the glass thickness. The experimental results show that the proposed method achieves repeatable, high-precision measurement of glass thickness.
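The gray-centroid step mentioned above has a standard form: for each image column, the sub-pixel row position of the laser line is the intensity-weighted mean of the row coordinates. A minimal sketch with a synthetic stripe (the toy image is an assumption for illustration):

```python
import numpy as np

def gray_centroid_per_column(img):
    """Locate a laser line with sub-pixel accuracy: for each image column,
    take the intensity-weighted mean of the row coordinates (gray centroid).
    img: 2D array of gray levels with one bright stripe crossing the columns."""
    rows = np.arange(img.shape[0], dtype=float)[:, None]
    weight = img.astype(float)
    return (rows * weight).sum(axis=0) / weight.sum(axis=0)

# a synthetic 5x3 image whose stripe straddles rows 2 and 3
img = np.array([[0, 0, 0],
                [0, 0, 0],
                [10, 20, 10],
                [10, 20, 30],
                [0, 0, 0]])
print(gray_centroid_per_column(img))  # [2.5 2.5 2.75]
```

Applying this to the two reflected lines gives the two coordinate sets that feed the triangulation formula.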
The Qinghai-Tibet plateau is an increasingly important part of the Chinese mainland. Since military tension on the China-India boundary arose a few years ago, China has been focusing on tasks such as remote sensing of the Qinghai-Tibet plateau. The plateau is too large for basic meteorological, vegetation, and hydrological datasets to be collected by manual means, but space-based techniques can meet this requirement: payloads onboard spacecraft, such as lidar, radiometers, and radar, provide a good way to collect data effectively over a particular area of interest. With China's deployment of a comprehensive survey of the Qinghai-Tibet plateau, a feasibility assessment of a space project to monitor the plateau is pressing, and choosing the most suitable orbit is one of its tasks. Herein, three candidate orbits are simulated and assessed. In case 1, a sun-synchronous circular orbit at 250 km altitude is analyzed; simulations give an orbital altitude decay rate of 10.2 km per day, so keeping the altitude stable (offsetting the decay) would require 558 kg of fuel per year, or 1117 kg over two years. In case 2, an elliptical orbit with a 250 km perigee and 500 km apogee is considered; simulations give a decay rate of 2.461 km per day, requiring 130 kg of fuel per year, or 261 kg over two years. In case 3, an elliptical orbit with a 250 km perigee and 600 km apogee is considered; simulations give a decay rate of 1.67 km per day, requiring 87.6 kg of fuel per year, or 175.2 kg over two years. In the simulations, the spacecraft's area-to-mass ratio is assumed to be 0.01 square meters per kilogram and its mass is set to 500 kg. As a trade-off between economy and the payload's observation advantages, case 3 is preferred as the operational orbit; in such an orbit, the spacecraft will contribute most efficiently to the comprehensive survey of the Qinghai-Tibet plateau.
This paper analyzes the importance of gear parameter detection in industrial production and uses digital image processing to detect gear parameters. In view of the shortcomings of existing measurement methods, a method for measuring gear size parameters based on Hough-transform circle segmentation is proposed. The center of the gear is found by applying the Hough transform to the gear contour. Based on the distance from each contour point to the center, a new coordinate system is established in which the gear contour unwraps into a regular curve. The number of crests of this curve is the number of gear teeth, and the ordinate of the crests is the tip-circle radius of the gear. Finally, the other gear parameters are calculated from the gear formulas.
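The crest-counting step on the unwrapped radius-versus-angle curve can be sketched directly; the synthetic 12-tooth contour below is an illustrative assumption (the paper works on real extracted contours, and real data would need smoothing before peak counting):

```python
import numpy as np

def count_teeth(radii):
    """Count the crests of the unwrapped contour: radii[i] is the distance of
    the contour point at angle 2*pi*i/N from the gear centre. A crest is a
    sample strictly greater than both circular neighbours."""
    n = len(radii)
    crests = 0
    for i in range(n):
        if radii[i] > radii[i - 1] and radii[i] > radii[(i + 1) % n]:
            crests += 1
    return crests

# synthetic 12-tooth gear: base radius 50, tooth height 5
angles = np.linspace(0, 2 * np.pi, 360, endpoint=False)
radii = 50 + 5 * np.cos(12 * angles)
print(count_teeth(radii), radii.max())  # 12 teeth, tip-circle radius 55.0
```

The crest ordinate (the curve's maximum) is the tip-circle radius, from which the remaining parameters follow via the gear formulas.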
To explore rich marine resources more efficiently and monitor marine life information in real time, this paper proposes a real-time marine biological monitoring and management system based on Java technology, combined with marine information collectors such as underwater cameras. After a seabed image captured by the information collector is uploaded to the system server, the system processes the image according to its blur characteristics to obtain a clearer image, then classifies, integrates, and stores it uniformly to facilitate later retrieval. Compared with the traditional approach in which image processing and image management are separate, the system can save considerable manpower and time.
Because traditional measuring equipment and methods cannot satisfy the requirements of micrometer-level accuracy and real-time measurement of LED tape coating, this paper proposes a three-dimensional method to measure the thickness of LED tape coating based on a linear-array spectral confocal sensor. First, distance data are collected by linear-array spectral confocal scanning and converted into 3D point cloud data; the point cloud is then solidified and smoothed to make the 3D object more realistic. Finally, the 3D entity is manipulated interactively in the Point Cloud Library to measure the tiny parts of the object manually. Subsequent automatic measurements drive the grating ruler to the specified positions based on the earlier manual measurements and a procedure file. The experimental results indicate that the error of the proposed measurement method is less than 3 μm and an automatic measurement completes within 2.5 s. In addition, the measurement accuracy is as high as 99.9%, showing that the proposed method is competitive.
Remote sensing technology has great advantages for fire detection; however, many studies neglect the contribution of small fires, whose burned areas and occurrence times are uncertain. Crop residue burning is a kind of biomass burning and also a kind of small fire. So far, neither satellite remote sensing data nor detection algorithms have met the monitoring requirements of crop residue burning, which remains difficult to detect and monitor. In this paper, using the MODIS burned area product and active fire product, a burned-area detection algorithm for crop residue burning was developed and then tested in the Yangtze River Delta. The results show that the developed algorithm not only detects crop residue burning well but also estimates the burned area. This study of small fires represents clear progress compared with other fire detection methods.
The disadvantages of traditional optical detection of space targets, such as time-window limitations and severe atmospheric effects, are the main restrictions on space target detection and recognition, especially for GEO satellites. As an optical parameter, polarization is independent of intensity and spectrum and is sensitive to materials and surface properties. Polarization detection offers distinct advantages in highlighting targets, reducing atmospheric effects, and discriminating camouflaged targets. Therefore, polarization detection can enhance detection ability and has become a hot spot in space target detection. In this paper, the advantages of polarization detection of space targets are analyzed, the development of polarization detection devices and space target polarization observations is summarized, and the future of space target polarization detection is forecast.
Tornadoes are not among the main natural disasters in China, but they can cause serious casualties and economic losses. As a means of large-scale, dynamic monitoring, remote sensing can assess the loss of housing and facility agriculture in a tornado disaster and dynamically monitor the reconstruction of infrastructure such as post-disaster housing. Based on high-resolution optical pre-disaster images, basic geographic data, and GF-1, GF-2, and TripleSat Constellation satellite images, damaged houses, temporary resettlement sites, and damaged facility agriculture were monitored during the disaster, and transitional and centralized settlement sites were dynamically monitored afterwards. Dynamic monitoring of the June 2016 tornado disaster in Yancheng City, Jiangsu Province shows that satellite remote sensing can help assess the degree of disaster and resettlement, as well as dynamically monitor residents' resettlement and production recovery after the disaster, providing an objective way to judge the progress of post-disaster recovery and reconstruction.
Infrared smoke interference technology seriously affects the combat effectiveness of photoelectric guided weapons in modern warfare. Because of the occlusion caused by a smoke screen, the robustness of image matching guidance algorithms decreases. Thus, judging whether smoke interference is present in an image and extracting the smoke-screen area are of great importance to the accuracy of image matching guidance. However, most smoke detection methods are aimed at early fire warning and focus only on whether smoke exists, whereas we are concerned with both discriminating smoke interference and extracting the smoke-screen area. In this paper, a smoke detection method based on superpixel segmentation and region merging is proposed. First, over-segmented regions of the input infrared image are obtained by superpixel segmentation. Then, a fused texture feature of the image is computed. Finally, the superpixel regions are merged based on the fused features of each superpixel block, completing the smoke-screen area extraction.
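The final merging step — grouping superpixel blocks whose texture features are similar — can be sketched with union-find over pairwise feature distances. This is a toy illustration under assumed feature vectors and an assumed Euclidean-distance criterion; the paper's fused texture feature and merging rule may differ:

```python
import numpy as np

def merge_regions(features, threshold):
    """Merge regions whose feature vectors are closer than `threshold`
    (Euclidean distance), using union-find. features: (N, D) array, one row
    per superpixel block. Returns a merged label for each block."""
    n = len(features)
    parent = list(range(n))

    def find(i):
        # find the set representative, with path halving
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    for i in range(n):
        for j in range(i + 1, n):
            if np.linalg.norm(features[i] - features[j]) < threshold:
                parent[find(i)] = find(j)

    return [find(i) for i in range(n)]

# four blocks: two smoke-like and two background-like feature vectors (assumed)
feats = np.array([[0.9, 0.1], [0.88, 0.12], [0.1, 0.9], [0.12, 0.88]])
labels = merge_regions(feats, threshold=0.2)
print(labels)  # two groups: smoke blocks share one label, background another
```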
This paper analyzes the causes of image noise in seawater and its influence on target images from a UUV (unmanned underwater vehicle), and points out the shortcomings of existing noise-suppression methods. To address these problems, we propose a real-time noise-suppression method for target images on the UUV platform. The algorithm has three steps: (1) binarize the image by finding an appropriate threshold based on the between-class variance; (2) apply fast morphological processing to the binary image to separate adherent noise; (3) label the target connected component with the four-neighbor method and gradually attenuate pixel values outside the target, following principles of human vision, to suppress noise. Experimental results show that the method preserves the edges and details of the target well while suppressing noise, and it is fast enough to satisfy the accuracy and timeliness required for underwater video processing.
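The thresholding criterion in step (1) — maximizing the dispersion (variance) between classes — is the classical Otsu method. A minimal sketch of that step on a toy bimodal image (the example image is an assumption, not the paper's data):

```python
import numpy as np

def otsu_threshold(img):
    """Pick the gray level that maximises the between-class variance
    (Otsu's method) for an 8-bit image given as a numpy array."""
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    prob = hist / hist.sum()
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0, w1 = prob[:t].sum(), prob[t:].sum()  # class probabilities
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (np.arange(t) * prob[:t]).sum() / w0        # class means
        mu1 = (np.arange(t, 256) * prob[t:]).sum() / w1
        var = w0 * w1 * (mu0 - mu1) ** 2                  # between-class variance
        if var > best_var:
            best_t, best_var = t, var
    return best_t

# bimodal toy image: dark background around 20, bright target around 200
img = np.array([[20, 22, 21, 200], [19, 201, 199, 20]], dtype=np.uint8)
t = otsu_threshold(img)
binary = img >= t
print(t, binary.sum())  # threshold falls between the two modes; 3 target pixels
```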
Restoring images degraded by an underwater environment is challenging, in part because light traveling underwater suffers two combined degradations, scattering and absorption, which lead to inaccurate transmittance estimation. In this work, we propose an underwater image dehazing and color correction algorithm based on scene depth estimation. Through scene depth estimation, we obtain an accurate transmittance and thereby a better dehazing effect. The experimental results show that our approach obtains good-quality images, with visibility enhancement comparable to or better than other recent methods. For color recovery, we obtain consistent colors across different images regardless of water conditions. Our method not only achieves underwater image dehazing but also guarantees the accuracy and timeliness of the recovered results.
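Depth-based dehazing commonly rests on the standard scattering model I = J·t + A·(1 − t) with transmittance t = exp(−β·d). The sketch below illustrates that generic model only; the attenuation coefficient, airlight, and toy pixels are assumptions, and the paper's depth estimation and color correction are not reproduced here:

```python
import numpy as np

def dehaze_from_depth(image, depth, airlight, beta=0.8, t_min=0.1):
    """Restore a hazy/underwater image with the scattering model
    I = J*t + A*(1 - t), where the transmittance t = exp(-beta * depth)
    is derived from an estimated per-pixel scene depth."""
    t = np.maximum(np.exp(-beta * depth), t_min)  # clamp to avoid amplifying noise
    return (image - airlight) / t[..., None] + airlight

# toy 1x2 image: the farther pixel gets the stronger correction
image = np.array([[[0.6, 0.6, 0.7], [0.75, 0.75, 0.8]]])
depth = np.array([[1.0, 2.0]])
restored = dehaze_from_depth(image, depth, airlight=np.array([0.8, 0.8, 0.9]))
print(restored.round(3))
```

The lower bound on t is the usual safeguard: at great depths the transmittance approaches zero and inverting the model would amplify sensor noise.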
To address the problems that the pointer is occluded during manual inspection of hollow-pointer instruments in substations and that the reading error caused by the shooting angle is large, a pointer-segment detection method based on annular region segmentation is proposed. The method combines the principle of the Hough gradient method and uses multi-layer template feature matching to build a pointer template library. First, the image is grayed, binarized, and dilated. Second, the tilted instrument image is corrected with a perspective transformation matrix. Finally, the contour is detected with the Canny algorithm, and pixels in the annular region are scanned from outside to inside until the pointer is found; the line connecting the instrument center to the point detected in the annular region gives the pointer position. The results show that the automatic recognition system can accurately identify the pointer reading; compared with an SVM-based method, the reading accuracy is higher and the false detection rate is lower.
Path planning is essential for Unmanned Surface Vessels (USVs). Many path planning algorithms have been proposed in recent years; however, their high computational complexity makes them time-consuming and unsuitable for online path planning. In this paper, a rapid path planning algorithm for USVs is developed. The proposed algorithm divides the search space into three subspaces: a starting subspace, an end subspace, and a passing subspace. Taking the maneuvering performance of USVs into account, our algorithm plans a path from the edge of the starting subspace to the end subspace, which dramatically reduces the computational complexity. The experimental results show that the proposed scheme fulfills the path planning task efficiently.
"Intelligence" is a buzzword in today's society, reflecting people's high requirements for quality of life, and smart homes that make people comfortable, convenient, and satisfied have emerged. This research takes a cloud platform as the software platform and combines infrared telemetry technology to design a smart home control system with an STM32 as the primary system controller, ZigBee as the home intranet communication module, and sensors as the environmental data acquisition module. The communication quality of the home intranet and the data monitoring performance of the cloud platform were tested. The experimental results show that the communication quality of the system is good, with almost no packet errors, and can meet the daily needs of users. Users can remotely monitor the home environment and control home equipment through the cloud platform.
The black soil zone in northeast China is one of the three largest black soil zones in the world and the most important cultivated area for growing food crops in China. Remote sensing can obtain regional soil information over large areas more rapidly and with less labor and money. One key issue in soil investigation is the extraction of bare soil. Hyperspectral remote sensing data have more spectral bands and nearly continuous spectral curves, carrying more detailed information about soil properties than traditional multispectral images; using hyperspectral data, reliable bare soil information can be obtained. This study compares different bare soil extraction methods for the black soil zone and analyzes the feasibility of applying them to AHSI/GF-5 data. Baoqing County in Heilongjiang Province is chosen as the study area. To perform a comprehensive comparison, we evaluate 8 classical target detection algorithms and analyze the impacts of spectral dimension reduction and spatial filtering on the extraction results. The results show that it is feasible to extract bare soil information in the black soil zone from AHSI/GF-5 hyperspectral data. MF and CEM achieve the best extraction results with NPSAD and MNF under interactive parameter adjustment, while MTMF also obtains a good extraction result without human intervention.
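Among the algorithms compared, CEM (constrained energy minimization) has a compact closed form: it minimizes the average filter-output energy subject to a unit response to the target signature, w = R⁻¹d / (dᵀR⁻¹d). A minimal NumPy sketch, with an assumed pixels-by-bands data layout:

```python
import numpy as np

def cem_filter(X, d):
    """CEM detector scores for pixel matrix X (num_pixels x num_bands)
    and target signature d (num_bands,). A pixel equal to d scores 1."""
    R = X.T @ X / X.shape[0]            # sample correlation matrix
    R_inv = np.linalg.inv(R)
    w = R_inv @ d / (d @ R_inv @ d)     # filter with unit response to d
    return X @ w
```

Thresholding the scores then separates bare-soil pixels from the background; the other detectors in the comparison differ mainly in how they model the background statistics.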
A detailed distribution map of different vegetation classes is of great importance for analyzing the global ecosystem. Compared with traditional remote sensing data, hyperspectral remote sensing (HRS) data have hundreds of spectral bands and continuous spectral curves, showing great potential for sophisticated vegetation classification, and the AHSI (Advanced Hyper-Spectral Imager) on board the GF-5 satellite has eased the shortage of satellite HRS data. According to the characteristics of AHSI data, we propose a modified sophisticated vegetation classification method that constructs and optimizes a vegetation feature set (FBS). The method takes band quality, vegetation biochemical parameters, and the spectral angle distance of neighborhood pixels into consideration. The results show that our method obtains better classification results than traditional methods, with higher overall accuracy and less salt-and-pepper noise, indicating that it is feasible to distinguish different kinds of vegetation using AHSI/GF-5 data.
In view of the current situation that statistical analysis of geographical conditions is mostly limited to basic statistics, and that no unified standard has yet been established for a comprehensive statistical analysis index system and its technical methods, and based on the practical needs of deepening follow-up applications of geographical-conditions data, this paper proposes a framework for regional comprehensive geographic statistical analysis and evaluation and applies it to the Pearl River Delta Economic Zone. The framework can reveal the interaction and influence of regional resources, ecology, population, economy, society, and other elements in geographic space from the aspects of resource utilization, ecological civilization, social livelihood, urbanization, and so on. It can provide a technical reference for the comprehensive statistical analysis work of other units and new ideas for the further application of geographic-conditions census evaluation and monitoring.
Posting on Sina Weibo has become the most popular and quickest way to spread emergent news; meanwhile, the relevant management departments are obliged to deal with emergency information on social media within a short time. Predicting the propagation effect of emergency microblog entries can therefore help management departments spot probable coming problems in time and improve the predictability of decision-making. In this paper, a rarely studied problem is addressed: predicting the propagation effect of entries about major emergencies released by official media. We measure the propagation effect of emergency microblog entries by their repost, comment, and favorite counts. To reach this target, user profiles, text features, and interaction attributes were first extracted and verified separately. With these filtered multi-features, an improved model based on random forest is then constructed, trained, and tested for predicting the public's interactive behaviors on a Sina Weibo dataset. The experimental results demonstrate the effectiveness of our algorithm compared with most existing models.
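As a hedged sketch of the prediction stage, scikit-learn's RandomForestRegressor can map extracted features to one interaction count (e.g. reposts); the feature columns and values below are hypothetical placeholders, not the paper's actual feature set or data.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Hypothetical features per microblog entry:
# [follower_count, text_length, has_image]
X = np.array([[1200, 80, 1],
              [50, 20, 0],
              [90000, 140, 1],
              [300, 60, 0]], dtype=float)
y = np.array([150.0, 2.0, 5000.0, 10.0])  # illustrative repost counts

model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(X, y)
predicted_reposts = model.predict(X)
```

The paper's improvement lies in the feature filtering and model tuning around this baseline; separate models (or a multi-output variant) would cover comment and favorite counts.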
An Uninterruptible Power Supply (UPS) is an energy conversion device that uses a battery's chemical energy as backup energy to continuously provide electric power to equipment in case of power failure or abnormal grid conditions. According to working mode, UPSs can be divided into standby UPS, line-interactive UPS, and online double-conversion UPS.
Wuhan Metro currently uses Kehua UPS, an online double-conversion power-frequency uninterruptible power supply system. The UPS signal power supply system of Wuhan Metro works uninterruptedly all year round, and prolonging its service life is an important measure of the maintenance level of technicians and power managers [1]. The UPS, used in conjunction with an intelligent power supply screen, has become an important part of the urban rail transit power supply system. The efficiency of daily UPS maintenance and fault handling therefore directly affects the working efficiency of the signal system. This paper clarifies the daily maintenance content of the Kehua UPS and presents its common fault information and fault handling steps. Following these steps, faults can be eliminated in the shortest time so that the Kehua UPS can quickly resume its normal working state.
LCD is a common display device. Because its color reproduction is device-dependent, the LCD must be colorimetrically characterized. In this paper, polynomial regression is used to establish the color conversion model from RGB to CIEXYZ for colorimetric characterization of an LCD; black point correction is added, different polynomial parameter sets are solved, and the resulting color differences are compared. In the experiment, 17 groups of training samples were selected to solve the parameters, and 200 groups of random test samples were used to verify the accuracy of the model. The experimental results show that the cubic polynomial model has the highest accuracy, with a maximum color difference of 4.2962, achieving a better display effect.
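The polynomial regression step amounts to a least-squares fit over polynomial expansions of the RGB training samples. The sketch below uses an illustrative cubic term set; the paper's exact terms and its black point correction are not reproduced here.

```python
import numpy as np

def poly_terms(rgb):
    """Cubic polynomial expansion of a normalized RGB triple
    (an illustrative term set, not necessarily the paper's)."""
    r, g, b = rgb
    return np.array([1.0, r, g, b, r * g, r * b, g * b,
                     r * r, g * g, b * b, r ** 3, g ** 3, b ** 3, r * g * b])

def fit_characterization(rgb_samples, xyz_samples):
    """Least-squares fit of the polynomial RGB -> CIEXYZ model."""
    A = np.array([poly_terms(p) for p in rgb_samples])
    M, *_ = np.linalg.lstsq(A, np.array(xyz_samples), rcond=None)
    return M

def rgb_to_xyz(rgb, M):
    """Apply a fitted model M to one RGB triple."""
    return poly_terms(rgb) @ M
```

Model accuracy would then be reported as the CIE color difference between predicted and measured XYZ on the held-out test samples.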
Photon-counting detectors (PCDs) have broad application prospects in medical X-ray computed tomography (CT) and X-ray imaging: they can improve contrast and spatial resolution, optimize spectral imaging, and exploit energy-dependent attenuation coefficients, offering great potential for material composition identification. However, PCD measurements suffer spectral distortion due to physical phenomena occurring in the detector, such as pulse pileup, charge sharing, K-escape, and Compton scattering. Since directly modeling these phenomena is very complicated, this paper proposes a neural network spectral correction method based on Monte Carlo simulation: the Monte Carlo method simulates the particle transport process to obtain undistorted spectra, which serve as labels for the neural network, while the distorted spectra serve as input, and the relationship between distorted and corrected spectra is learned by training the network. After training, the model was evaluated on a test set; the standard error between predictions and labels was only 25.1601 ppm. This method can effectively correct the spectral distortion of photon-counting detectors and more accurately recover X-ray spectral data.
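The correction pipeline — simulate undistorted/distorted spectrum pairs, then learn the inverse mapping — can be illustrated in miniature. Here a toy linear smearing stands in for the physical distortions and a linear least-squares model stands in for the paper's neural network; both are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the Monte Carlo stage: "true" spectra plus a known
# linear smearing that mimics charge sharing between neighboring bins.
true_spectra = rng.random((200, 16))
smear = 0.8 * np.eye(16) + 0.1 * np.eye(16, k=1) + 0.1 * np.eye(16, k=-1)
distorted = true_spectra @ smear

# Learn the correction from (distorted, true) pairs by least squares,
# standing in for training the neural network on simulated pairs.
W, *_ = np.linalg.lstsq(distorted, true_spectra, rcond=None)
corrected = distorted @ W
```

The real distortions are nonlinear (pulse pileup depends on count rate), which is exactly why the paper reaches for a neural network rather than a linear inverse.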
Waste recycling is very important for the economy and the climate balance of the world. Intelligent classification of recyclable garbage is therefore an important goal for humanity, and deep learning models can be used for this purpose. In this paper, a deep learning framework with different architectures, such as DenseNet, Inception-ResNet-V2, MobileNet, and Xception, is tested on the TrashNet dataset to find the most efficient approach; Adam is selected for optimizing the neural network models. Experimental results validate that deep learning models with the Adam optimizer provide a better test accuracy rate than with the Adadelta optimizer. Comparing the quantitative results of the architectures in the framework, DenseNet with fine-tuning achieves the best result (a test accuracy of 95%), and Inception-ResNet-V2 with fine-tuning is second best (a test accuracy of 94%).
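Since the comparison favors Adam, its update rule is worth spelling out: exponentially decayed first- and second-moment estimates of the gradient, with bias correction. A single-step NumPy sketch using Adam's standard default hyperparameters:

```python
import numpy as np

def adam_step(theta, grad, m, v, t, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam update for parameter theta given gradient grad at step t."""
    m = b1 * m + (1 - b1) * grad          # first-moment estimate
    v = b2 * v + (1 - b2) * grad ** 2     # second-moment estimate
    m_hat = m / (1 - b1 ** t)             # bias correction for zero init
    v_hat = v / (1 - b2 ** t)
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v
```

On the first step the bias-corrected ratio m_hat / sqrt(v_hat) reduces to the sign of the gradient, so the parameter moves by roughly the learning rate regardless of gradient scale — one reason Adam tends to train fine-tuned networks faster than Adadelta here.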
Haze is the result of the interaction between specific climatic conditions and human activities. When observing objects in hazy conditions, an optical system suffers degradation such as color attenuation, loss of image detail, and reduced contrast. Image haze removal is a challenging, ill-conditioned problem because of the ambiguity between the unknown scene radiance and the medium transmission. In earlier studies, traditional machine vision methods usually impose various constraints or priors to obtain a reasonable haze removal solution, and the key step is to estimate the medium transmission of the input hazy image. In this paper, by contrast, we concentrate on recovering a clear image directly from a hazy input using a Generative Adversarial Network (GAN), without estimating the transmission matrix or the atmospheric scattering model parameters. We present an end-to-end model consisting of an encoder and a decoder: the encoder extracts features of the hazy image and represents them in a high-dimensional space, while the decoder recovers the corresponding image from these high-level features. Optimization based on perceptual losses preserves high-quality textural information and reproduces more natural haze-free images. Experimental results on hazy image datasets show better subjective visual quality than traditional methods. Furthermore, we test the dehazed images on a specialized object detection network, YOLO; the detection results show that our method improves object detection performance on dehazed images, indicating that clean haze-free images can be obtained from hazy input through our GAN model.
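For reference, the atmospheric scattering model that traditional dehazing inverts — and that the GAN here sidesteps — composes a hazy image as I = J·t + A·(1 − t), where J is the clean radiance, t the medium transmission, and A the airlight:

```python
import numpy as np

def hazy_image(J, t, A):
    """Atmospheric scattering model: blend clean radiance J with
    airlight A according to medium transmission t in [0, 1]."""
    return J * t + A * (1.0 - t)
```

At t = 1 (no haze) the model returns J unchanged; as t approaches 0 every pixel tends to the airlight A, which is why estimating t per pixel is the crux of prior-based methods.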
The Geiger-mode Avalanche Photodiode (APD) array lidar is a non-scanning lidar with a small volume, fast imaging speed, and high sensitivity. This paper studies 3D target detection in Geiger-mode APD array lidar images. Owing to its imaging characteristics, Geiger-mode APD array lidar produces significant noise, which the paper decomposes into four parts: environmental noise, loss noise, internal noise, and crosstalk noise. According to these noise characteristics, Geiger-mode APD array lidar imaging was simulated, and a target detection algorithm was studied on this basis. The paper proposes a filtering method based on KNN classification, combined with an improved loop-filtering algorithm, to preprocess the images, and then an adaptive superposition algorithm to fuse the preprocessed multi-frame images. Testing the target detection algorithm on five image datasets captured by a Geiger-mode APD array lidar, medium- and small-scale targets can be detected within 20 frames, large-scale targets within 50 frames, and long-distance targets within 100 frames.
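A much-simplified stand-in for the KNN-based preprocessing step: points whose mean distance to their k nearest neighbors is far above the global average are treated as noise returns and dropped. The threshold rule here is an assumption for illustration, not the paper's classifier.

```python
import numpy as np

def knn_denoise(points, k=3, factor=2.0):
    """Remove points whose mean k-nearest-neighbor distance exceeds
    factor times the average over all points (brute force, small clouds)."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    d.sort(axis=1)
    mean_knn = d[:, 1:k + 1].mean(axis=1)   # index 0 is the self-distance
    return points[mean_knn < factor * mean_knn.mean()]
```

Genuine target returns cluster tightly in range, so they survive the filter, while isolated dark-count and crosstalk hits are rejected before the multi-frame superposition stage.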
This paper proposes a new image encryption algorithm based on logistic chaotic mapping and DNA coding. A coupled chaotic system with variable-parameter logistic mapping is proposed to generate sequences with good pseudo-random properties. Before encrypting plain-text images, the distinct features of some special images are eliminated by superposing them with a pseudo-random image. Combined with DNA encoding operations, an encryption algorithm that interleaves diffusion and permutation is designed. Simulation results show that the proposed algorithm has a large key space and high sensitivity to key and plain-text, and is robust against exhaustive, statistical, and differential attacks, demonstrating good encryption effect and security.
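The generator at the heart of such schemes is the logistic map x ← r·x·(1 − x), which is chaotic for r near 4; its iterates can be quantized into a keystream for the diffusion stage. A minimal sketch (the paper's variable-parameter coupling is not reproduced here):

```python
def logistic_sequence(x0, r, n, burn_in=100):
    """Generate n iterates of the logistic map x -> r*x*(1-x), discarding
    an initial transient. Chaotic (key-sensitive) for r close to 4."""
    x = x0
    for _ in range(burn_in):
        x = r * x * (1.0 - x)
    seq = []
    for _ in range(n):
        x = r * x * (1.0 - x)
        seq.append(x)
    return seq

def keystream_bytes(seq):
    """Quantize iterates in (0, 1) to bytes for XOR-style diffusion."""
    return bytes(int(x * 256) % 256 for x in seq)
```

Key sensitivity is visible directly: perturbing the seed by as little as 1e-10 yields an entirely different sequence after the burn-in, which is what gives the cipher its large effective key space.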
Group detection is a crucial component of intelligent video surveillance: it captures crowd motion and applies directly to emergency security in complex scenes, and has thus attracted plenty of attention in related fields. However, existing works cannot fully utilize deep and precise crowd features. Recently, with the rapid development of deep learning and the availability of challenging datasets, crowd density estimation in single images has achieved the desired accuracy. Since density maps provide high-level semantic information about the crowd, this paper proposes a density-map-assisted scene analysis method to detect groups in crowd scenes. The main contributions of this study are threefold: (1) a density-map-based super-pixel segmentation method obtains multiple image patches, which are taken as the objects of further analysis; (2) a group detection method based on multi-view clustering is proposed, in which density maps are used to construct similarity graphs from the aspects of interaction, spatial distribution, motion distribution, and motion pattern; (3) a post-processing strategy combines groups with high relevance to determine the final grouping. Experimental results show that the method accurately detects groups in image sequences and, compared with existing methods, achieves better performance on the CUHK Crowd Dataset.
In view of the problem that there are too many vehicles on the road and parking spaces cannot meet demand, an outdoor rotary three-dimensional intelligent parking lot was designed. The parking lot can double the number of parking spaces on the original land area and provide more intelligent services than an ordinary parking lot, making parking and retrieving vehicles more convenient for owners. PLC control and ZigBee wireless communication are used to combine rotating parking spaces with a three-dimensional garage, which supports smart cards, two-dimensional code reservations, and other convenient services. The design can intelligently manage the vehicles and parking spaces of the entire parking lot.
In view of the unreasonable task allocation in traditional manpower-based logistics sorting and in vehicle transport within modern logistics systems, this article studies a PLC-based control system for intelligent-logistics Automated Guided Vehicles (AGVs). A PLC serves as the AGV control module, and Siemens STEP 7 industrial control software is used as the platform for the hardware and software design of the system. For domestic logistics enterprises with comprehensive warehouses, vehicle transport paths are relatively complicated, so a system optimization scheme based on the Floyd shortest-path algorithm is proposed. Experiments verify that managers can control vehicles in real time and obtain the necessary vehicle information and positions. The improved Floyd shortest-path algorithm in this system improves the operational efficiency of AGV path planning and reduces task time.
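The Floyd (Floyd-Warshall) algorithm referenced above computes all-pairs shortest paths over the warehouse route graph by successively allowing each node as an intermediate stop; a minimal sketch on an adjacency matrix:

```python
def floyd_shortest_paths(graph):
    """All-pairs shortest path costs for an adjacency matrix where
    graph[i][j] is the direct edge cost (float('inf') if no edge)."""
    n = len(graph)
    dist = [row[:] for row in graph]        # copy; don't mutate the input
    for k in range(n):                      # allow node k as an intermediate
        for i in range(n):
            for j in range(n):
                if dist[i][k] + dist[k][j] < dist[i][j]:
                    dist[i][j] = dist[i][k] + dist[k][j]
    return dist
```

This O(n³) baseline is what the paper's improved variant builds on; recovering the actual routes (not just their costs) additionally requires a predecessor table updated alongside `dist`.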