Semantic segmentation is an important and foundational task in the application of high-resolution remote sensing images (HRRSIs). However, HRRSIs exhibit large intra-class differences and small inter-class differences, which poses a significant challenge to their high-accuracy semantic segmentation. To address this issue and obtain strong feature expressiveness, a deep conditional generative adversarial network (DCGAN) integrating fully convolutional DenseNet (FC-DenseNet) and Pix2pix is proposed. The DCGAN is composed of a generator-discriminator pair built on a modified downsampling unit of FC-DenseNet. The proposed method possesses strong feature expression ability because of its skip connections, the very deep network structure and multiscale supervision introduced by FC-DenseNet, and the supervision from the discriminator. Experiments on the DeepGlobe Land Cover dataset demonstrate the feasibility and effectiveness of this approach for the semantic segmentation of HRRSIs. The results also reveal that our method can mitigate the influence of class imbalance. Our approach to precise semantic segmentation can effectively facilitate the application of HRRSIs.
In China, some affordable housing projects violate construction plans through disordered or insufficient construction, which wastes and misappropriates government investment; such projects therefore need to be verified. Given the large number and wide distribution of affordable houses, traditional manual verification methods are too time-consuming and labor-intensive. In this paper, exploiting the high resolution and wide swath of the GF-1 satellite, a fast method for verifying affordable houses from GF-1 panchromatic images under geographic constraints is proposed. First, the morphological building index (MBI) method is used to extract building features over the entire study region, and image blocks are cropped out using the affordable houses' GPS-measured vector data as geographic constraints. Second, for the local features of each image block, the Canny operator is combined with an adaptive mean-shift image segmentation algorithm to extract the buildings within the block. Finally, whether an affordable house exists at the recorded location is judged from the overlap rate between the building extraction result and the vector data. Experiments show that the building extraction module effectively extracts buildings in image blocks of GF-1 panchromatic imagery, outperforms the MBI method alone, and enables effective verification of affordable houses.
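The final decision step above compares the extracted building mask against the GPS-measured footprint. A minimal sketch of that overlap-rate test, assuming both layers have been rasterized to the same grid and using a hypothetical decision threshold (the abstract does not state one):

```python
import numpy as np

def overlap_rate(extracted_mask, vector_mask):
    """Fraction of the GPS-measured vector footprint that is covered
    by the extracted building pixels. Both inputs are boolean arrays
    rasterized onto the same grid."""
    footprint = vector_mask.sum()
    if footprint == 0:
        return 0.0
    return float(np.logical_and(extracted_mask, vector_mask).sum()) / float(footprint)

def house_exists(extracted_mask, vector_mask, threshold=0.5):
    """Judge that the affordable house exists at the recorded location
    when the overlap rate exceeds a threshold (0.5 here is hypothetical)."""
    return overlap_rate(extracted_mask, vector_mask) >= threshold
```

A per-block call to `house_exists` then flags locations where the recorded house could not be confirmed in the imagery.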
Low-altitude unmanned aerial vehicles (UAVs) are widely used to acquire aerial photographs, some of which are oblique and have a large angle of view. Precise, automatic registration of such images is a challenge for conventional image processing methods. We present an affine scale-invariant feature transform (ASIFT)-based method that can register UAV oblique images at a subpixel level. First, we used the ASIFT algorithm to collect initial feature points. Positions of the feature points on corresponding local images were then corrected using the weighted least squares matching (WLSM) method. Mismatched points were discarded and a local transform model was estimated using the adaptive normalized cross-correlation algorithm, which also provides initial parameters for WLSM. Experiments show that sufficient feature points are collected to register, at the subpixel level, UAV and other images with large angle-of-view variations and strong affine distortions. The proposed method improves the matching accuracy of previous UAV image registration methods.
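The normalized cross-correlation score at the heart of the matching step above can be sketched as follows. This is a minimal single-patch version; the paper's adaptive windowing and the WLSM refinement are omitted:

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation between two equally sized patches.
    Returns a value in [-1, 1]; 1 means the patches are identical up to
    an affine intensity change."""
    a = a.astype(float) - a.astype(float).mean()
    b = b.astype(float) - b.astype(float).mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    if denom == 0:
        # A constant patch carries no texture to correlate against.
        return 0.0
    return float((a * b).sum() / denom)
```

Candidate ASIFT correspondences whose local NCC score falls below a threshold would be discarded as mismatches before the transform model is estimated.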
A wind turbine is a device that converts the wind's kinetic energy into electrical power. Accurate and automatic extraction of wind turbines helps government departments plan wind power plant projects. A hybrid and practical framework based on saliency detection for wind turbine extraction, using Google Earth imagery at a spatial resolution of 1 m, is proposed. It can be viewed as a two-phase procedure: coarse detection and fine extraction. In the first stage, we introduce a frequency-tuned saliency detection approach to initially detect the areas of interest containing wind turbines. This method exploits color and luminance features, is simple to implement, and is computationally efficient. Taking into account the complexity of remote sensing images, in the second stage we propose a fast method for fine-tuning the results in the frequency domain and then extract wind turbines from the salient objects by removing irrelevant salient areas according to the special properties of wind turbines. Experiments demonstrate that our approach consistently obtains higher precision and better recall rates. Comparison with other techniques from the literature shows that it is more applicable and robust.
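The frequency-tuned saliency idea used in the first stage scores each pixel by how far a lightly blurred version of the image departs from the global mean. A minimal single-channel sketch (the original formulation operates on Lab color; the 3x3 binomial blur here is a cheap stand-in for the Gaussian):

```python
import numpy as np

def frequency_tuned_saliency(img):
    """Per-pixel saliency = |global mean - blurred image|, computed on
    a single-channel float image for simplicity."""
    img = img.astype(float)
    # Separable 3x3 binomial blur as a stand-in for the Gaussian.
    k = np.array([1.0, 2.0, 1.0]) / 4.0
    blurred = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, img)
    blurred = np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, blurred)
    return np.abs(img.mean() - blurred)
```

Thresholding this map yields the coarse areas of interest that the second, fine-extraction stage then prunes using turbine-specific properties.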
The tremendous success of deep learning models such as convolutional neural networks (CNNs) in computer vision suggests an approach to similar problems in the field of remote sensing. Although research on repurposing pretrained CNNs for remote sensing tasks is emerging, the scarcity of labeled samples and the complexity of remote sensing imagery still pose challenges. We developed a knowledge-guided golf course detection approach using a CNN fine-tuned on temporally augmented data. The proposed approach combines knowledge-driven region proposal, data-driven detection based on a CNN, and knowledge-driven postprocessing. To confront data complexity, knowledge-derived co-occurrence, composition, and area-based rules are applied sequentially to propose candidate golf regions. To confront sample scarcity, we employed data augmentation in the temporal domain, which extracts samples from multitemporal images. The augmented samples were then used to fine-tune a pretrained CNN for golf course detection. Finally, commission error was further suppressed by postprocessing. Experiments conducted on GF-1 imagery prove the effectiveness of the proposed approach.
An urban new construction land parcel detection method based on the normalized difference vegetation index (NDVI) and a built-up area presence index (PanTex) is proposed for high-resolution remote sensing images. The method consists of three main steps: construction land detection using NDVI and PanTex, false change removal, and new construction land parcel extraction. More specifically, a change proportion index is introduced to convert the pixel-based change detection map to parcels in combination with a segmentation process. Experimental results validated on two cases of high-resolution optical satellite images demonstrate that the proposed method is efficient, achieving a per-object overall accuracy above 95% and significantly outperforming the traditional postclassification change detection method. Furthermore, the proposed method avoids the errors that classification introduces into postclassification comparison.
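The two pixel-level indices and the parcel-level change proportion index described above can be sketched as follows; the decision threshold on the proportion is hypothetical, as the abstract does not state one:

```python
import numpy as np

def ndvi(nir, red):
    """Normalized difference vegetation index, (NIR - Red) / (NIR + Red),
    guarded against division by zero."""
    nir = nir.astype(float)
    red = red.astype(float)
    return (nir - red) / np.maximum(nir + red, 1e-9)

def change_proportion(change_mask, parcel_mask):
    """Fraction of a segmented parcel's pixels flagged as changed.
    Parcels whose proportion exceeds a chosen threshold would be
    reported as new construction land."""
    n = parcel_mask.sum()
    if n == 0:
        return 0.0
    return float(np.logical_and(change_mask, parcel_mask).sum()) / float(n)
```

Aggregating per-pixel change evidence into a per-parcel proportion is what converts the noisy pixel-based map into the object-level result evaluated in the experiments.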
KEYWORDS: LIDAR, Visual process modeling, Clouds, Remote sensing, Data modeling, Systems modeling, Airborne laser technology, Data conversion, Data processing, Raster graphics
The urban environment is extremely complex, containing a multitude of features with different heights and structures. Traditional methods for extracting building information from optical remote sensing images are highly labor-intensive and time-consuming. This paper develops a new method to detect building outlines based on the height and intensity information of airborne LiDAR data. Texture, relative height, and intensity characteristics are first extracted from the LiDAR point cloud. Then, support vector data description (SVDD) is used to detect buildings with training knowledge. Finally, building outlines are obtained after postprocessing steps including small-region removal and raster-to-vector conversion. Experiments show that the proposed method is reliable and could be widely applied to other urban areas.
Urban building extraction is an important research topic in urban studies. We present a three-step method to detect building outlines. First, the DEM (digital elevation model) is separated from the DSM (digital surface model); our algorithm adopts a surface-based method for this step. The second step classifies buildings against other off-terrain objects using texture characteristics, described by four parameters: contrast, energy, entropy, and homogeneity. A rough set method is used to distinguish buildings from non-buildings based on knowledge gained from training data. Finally, the images in which buildings have been detected are converted to polygons, yielding the building outlines. The data set used in this paper is located in Ada County, Idaho, USA. Experiments show a building detection rate of more than 85% with our method, indicating that the adopted approach is feasible.
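The four texture parameters named above are standard statistics of the gray-level co-occurrence matrix (GLCM). A minimal sketch, assuming a single horizontal neighbor offset and a small number of gray levels:

```python
import numpy as np

def glcm(img, levels=8):
    """Gray-level co-occurrence matrix for the horizontal neighbor
    offset (0, 1), normalized to a joint probability table."""
    g = np.zeros((levels, levels))
    q = np.clip(img, 0, levels - 1)
    for a, b in zip(q[:, :-1].ravel(), q[:, 1:].ravel()):
        g[a, b] += 1
    return g / g.sum()

def texture_features(p):
    """Contrast, energy, entropy, and homogeneity of a normalized GLCM."""
    i, j = np.indices(p.shape)
    contrast = ((i - j) ** 2 * p).sum()
    energy = (p ** 2).sum()
    nz = p[p > 0]
    entropy = -(nz * np.log(nz)).sum()
    homogeneity = (p / (1.0 + np.abs(i - j))).sum()
    return contrast, energy, entropy, homogeneity
```

A perfectly uniform patch gives contrast 0, energy 1, entropy 0, and homogeneity 1, while textured building roofs and vegetation separate along these four axes, which is what the rough set classifier exploits.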
KEYWORDS: LIDAR, Data modeling, Image filtering, Visual process modeling, Image segmentation, Remote sensing, Image fusion, 3D modeling, Systems modeling, Data acquisition
This paper presents a new automatic method to detect building outlines based on height and intensity data from an airborne LiDAR system. The main idea is to detect building outlines using SVM (support vector machine) training knowledge obtained from artificial interpretation of the training data. Support vector machines (SVMs) are a set of related supervised learning methods used for classification and regression. Experiments using real data are presented and show the feasibility of the suggested approach.
Laser scanning is a fast and precise technique for sampling the earth's surface into an irregular point pattern. The large number of laser points hitting planar facades in urban areas makes it possible to extract objects (buildings, vegetation, etc.) in these areas. This paper presents a new approach that extracts buildings by analyzing the characteristics of contours generated from LiDAR point data, and it proves to be a robust and reliable method.
Clouds not only hide the ground but also cast shadows on it, so a fast and automatic method to remove clouds and their shadows from acquired satellite images is necessary. In this paper, a multispectral image fusion scheme to detect and remove clouds and their shadows is proposed. The algorithm is composed of four main steps: stationary wavelet transform, detection of clouds and their shadows, image fusion, and inverse stationary wavelet transform. Simulation experiments show that our method is valid and performs well.
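The detection and fusion steps of such a scheme can be sketched as follows. This is a simplified stand-in: simple brightness thresholds (the threshold values here are hypothetical) replace the paper's wavelet-domain detection, and a co-registered cloud-free second acquisition supplies the replacement pixels:

```python
import numpy as np

def detect_cloud_shadow(band, cloud_t=200, shadow_t=50):
    """Flag clouds as very bright pixels and their shadows as very dark
    ones; in the full scheme this test is applied to the low-frequency
    wavelet (approximation) band."""
    return (band >= cloud_t) | (band <= shadow_t)

def fuse(primary, secondary, mask):
    """Replace contaminated pixels of the primary image with the
    corresponding pixels of a co-registered secondary image."""
    out = primary.copy()
    out[mask] = secondary[mask]
    return out
```

Running detection before fusion confines the substitution to contaminated pixels, so clear regions of the primary image pass through unchanged.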