KEYWORDS: 3D modeling, Point clouds, Data modeling, RGB color model, Cameras, Error analysis, Data acquisition, Color, Crop monitoring, Atmospheric modeling
Leaf area in agricultural crops is a crucial indicator for understanding growth conditions and assessing photosynthetic efficiency. Traditional methods for measuring leaf area often involved destructive techniques, where leaves or entire plants were cut and manually measured. These methods not only reduced the yield of the destroyed crops but also required significant time and labor. In response to these challenges, this study focused on developing a method to monitor plant growth using RGBD cameras, specifically the RealSense L515 and iPhone 14 Pro, which can capture the three-dimensional structure of plants. The proposed method involves extracting plant portions from the acquired 3D point cloud data using color information characteristics and cluster classification. Subsequently, 3D models are created, and leaf area is estimated based on the surface area of these models. The experiments were conducted using artificial plants. The results showed that the method using the RealSense L515 sensor achieved an average absolute error rate as low as 6.6%, while the iPhone 14 Pro had an average absolute error rate of 10.8%. Although the RealSense L515 demonstrated better accuracy, the iPhone 14 Pro proved to be relatively usable even in outdoor environments.
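The color-based extraction step can be sketched with a standard vegetation index. Below is a minimal illustration using the excess-green (ExG) index on chromatic coordinates; ExG and its threshold are illustrative assumptions, not necessarily the exact color criterion used in the study:

```python
import numpy as np

def extract_plant_points(points, colors, exg_threshold=0.1):
    """Keep points whose color looks vegetative.

    points: (N, 3) XYZ; colors: (N, 3) RGB in [0, 1].
    ExG = 2g - r - b on chromatic (sum-normalized) coordinates.
    """
    rgb_sum = colors.sum(axis=1, keepdims=True)
    rgb_sum[rgb_sum == 0] = 1.0            # guard against black pixels
    r, g, b = (colors / rgb_sum).T
    mask = (2 * g - r - b) > exg_threshold
    return points[mask], mask

# toy cloud: one green (plant-like) point, one gray (background) point
pts = np.array([[0.0, 0.0, 0.5], [0.1, 0.0, 0.5]])
cols = np.array([[0.1, 0.8, 0.1], [0.5, 0.5, 0.5]])
plant, mask = extract_plant_points(pts, cols)
```

Cluster classification (for example DBSCAN) and meshing would follow on the retained points before the mesh surface area is summed.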
KEYWORDS: Point clouds, Machine learning, Data modeling, Matrices, Airborne laser technology, Laser systems engineering, Laser scattering, Education and training, Data analysis, Time metrology
In recent years, water-related accidents caused by torrential rain have occurred frequently. Visual search for persons requiring rescue is difficult from the coast or a riverbank, and water currents and underwater topography make searching from a boat difficult as well. This research aims to develop a safe, wide-area, and accurate target search method using point cloud data acquired from a drone. The authors focused on a LiDAR system called Airborne Laser Bathymetry (ALB), which is specialized for underwater observation. A green-laser ALB, in particular, can obtain underwater topography data because it is equipped not only with the near-infrared laser used in conventional land surveying but also with a green visible laser for observing relatively shallow water. The purpose of this study is to identify the water surface, underwater topography, and underwater floating objects such as algae from green-laser ALB point cloud data using machine learning methods. We use PointNet++, a network effective for point cloud processing, and an SVM (Support Vector Machine), which is specialized for two-class classification. PointNet++ addresses the limitations of the previously used PointNet by sampling local features based on point cloud distance and density. In the proposed method, PointNet++ takes the three-dimensional coordinates X, Y, and Z as input and extracts three classes: water surface, underwater topography, and floating objects. Then, by inputting the Z-coordinate data and backscatter (intensity) data into the SVM, persons requiring rescue can be detected from among the floating objects.
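The second stage, separating persons from other floating objects, is a two-class problem on height and backscatter features. A minimal linear SVM trained by sub-gradient descent on synthetic (Z, intensity) features is sketched below; the feature values and training scheme are illustrative assumptions, not the study's trained model:

```python
import numpy as np

def train_linear_svm(X, y, lam=0.01, lr=0.1, epochs=1000):
    """Batch sub-gradient descent on the regularized hinge loss.
    X: (N, 2) features (Z coordinate, backscatter intensity);
    y: labels in {-1, +1}."""
    n = len(X)
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        margins = y * (X @ w + b)
        miss = margins < 1                  # margin violations
        grad_w = lam * w - (y[miss, None] * X[miss]).sum(axis=0) / n
        grad_b = -y[miss].sum() / n
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

def predict(X, w, b):
    return np.sign(X @ w + b)

# hypothetical clusters: persons near the surface with strong backscatter,
# algae deeper with weaker backscatter
rng = np.random.default_rng(0)
persons = rng.normal([-0.2, 0.8], 0.05, size=(20, 2))
algae = rng.normal([-1.0, 0.3], 0.05, size=(20, 2))
X = np.vstack([persons, algae])
y = np.hstack([np.ones(20), -np.ones(20)])
w, b = train_linear_svm(X, y)
acc = (predict(X, w, b) == y).mean()
```

In the actual pipeline, the SVM would receive only the points that PointNet++ has already labeled as floating objects.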
Various natural disasters occur around the world. In Japan, heavy rains and earthquakes have caused particularly severe damage; we focus on the landslides they trigger. This study proposes a landslide detection method using synthetic aperture radar (SAR). SAR observes with microwaves, which are reflected according to the properties of materials on the earth's surface. Both amplitude and phase information can be obtained and used for various analyses, most often to detect changes caused by disasters: for example, change detection from differences in reflection intensity, analysis of terrain deformation from phase differences, and material identification from polarization properties. These approaches, however, require multiple SAR acquisitions, whereas rapid detection of the damaged area is necessary when a disaster occurs. For this reason, this study investigates a method for detecting damaged areas from a single SAR acquisition. Instance segmentation is conducted using YOLOv8. The SAR data used in the experiments were acquired for the Noto Peninsula earthquake, which occurred on January 1, 2024, in the Noto region of Ishikawa Prefecture and caused extensive damage. Images of landslide areas were extracted from the SAR data, annotated, and used to train YOLOv8 instance segmentation, whose performance was then evaluated on test data.
Ground deformation can be detected by processing SAR (Synthetic Aperture Radar) phase data acquired in different periods. However, because SAR observes the change in distance between the satellite and the ground surface, it is difficult to determine the direction of ground deformation, and on-site field observation is required since SAR observation results differ from the actual amount of ground deformation. This study aims to estimate ground deformation over a wide area using satellite SAR data, understand the disaster situation quickly, and reduce the risk of secondary damage during on-site field observation. In this paper, Interferometric SAR (InSAR) analysis is applied to estimate the ground deformation caused by the 2016 Kumamoto earthquake from C-band SAR data acquired by the Sentinel-1 satellite. A 2.5-dimensional analysis is conducted by combining the InSAR results of the ascending and descending orbits, and the direction of the earthquake-induced ground deformation is visualized using displacement vectors. Furthermore, changes in land cover, classified from surface vegetation and geology, are detected by machine-learning-based time-series analysis of optical images obtained from Sentinel-2. The results show that an accurate understanding of the damage situation over a wide area is very effective for estimating landslides and speeding up disaster response, such as evacuation.
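The 2.5-dimensional analysis combines the two line-of-sight (LOS) measurements into quasi-east and quasi-up components by solving a small linear system. A sketch under idealized geometry, with negligible north motion and purely east/west-looking passes; the incidence angles and displacements below are hypothetical:

```python
import numpy as np

def quasi_east_up(d_asc, d_dsc, inc_asc, inc_dsc):
    """Solve for quasi-east and quasi-up displacement from two LOS
    observations. Model: d = e*sin(inc)*look + u*cos(inc), with the
    ascending pass looking east (+1) and the descending pass west (-1).
    The north component is neglected, as usual in 2.5-D analysis."""
    A = np.array([[np.sin(inc_asc),  np.cos(inc_asc)],
                  [-np.sin(inc_dsc), np.cos(inc_dsc)]])
    return np.linalg.solve(A, np.array([d_asc, d_dsc]))

# round trip with hypothetical incidence angles and displacements
inc_a, inc_d = np.deg2rad(34.0), np.deg2rad(39.0)
e_true, u_true = 0.10, 0.05    # meters east, meters up
d_a = e_true * np.sin(inc_a) + u_true * np.cos(inc_a)
d_d = -e_true * np.sin(inc_d) + u_true * np.cos(inc_d)
e_est, u_est = quasi_east_up(d_a, d_d, inc_a, inc_d)
```

Applied pixel-by-pixel to the ascending and descending interferograms, this yields the displacement vector field used for visualization.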
In recent years, natural disasters have caused serious damage; landslides triggered by earthquakes are particularly damaging. However, it is difficult to predict when and where natural disasters will occur, so this study addresses early detection of landslides. SAR (Synthetic Aperture Radar) is a remote sensing technology that uses microwaves and can observe day and night in all weather conditions. However, SAR data are grayscale images that are difficult to analyze without specialized knowledge. We therefore use machine learning to detect disaster-induced changes that appear in SAR data. Two image-to-image translation models, pix2pix and pix2pixHD, are used. The objective of this study is to detect surface changes by generating pseudo-optical images from SAR data using machine learning. Both models were trained, and test images and actual disaster data were used as input. Simple terrain, such as forest alone, was generated with high accuracy, but complex terrain was difficult to generate. For the actual disaster data, features resembling disaster-induced changes appeared in the converted images; however, it was difficult to distinguish bare ground from grassland in the output images. In the future, the combination of data used for training needs to be considered.
Atmospheric particulate matter (PM) consists of tiny pieces of solid or liquid matter suspended in the Earth's atmosphere as aerosol. Recently, the density of fine particles (PM2.5; diameter of 2.5 micrometers or less) transported from China has become a serious environmental issue in East Asia. In this study, the authors have developed a PM2.5 density distribution visualization system using a ground-level sensor network dataset and a Mie lidar dataset. The former dataset is used for visualization and movement analysis of the horizontal PM2.5 density distribution; the latter is used for visualization and movement analysis of the vertical PM2.5 density distribution.
The authors have developed HuVisCam, a human vision simulation camera that can simulate not only the Purkinje effect for mesopic and scotopic vision but also dark and light adaptation, and abnormal miosis and abnormal mydriasis caused by the influence of mydriatic medicine or a nerve agent. The camera consists of a bandpass pre-filter, a color USB camera, an illuminator, and a small computer. In this article, an improvement of HuVisCam for specific color perception is discussed. For persons with normal color perception, a simulation function for various types of specific color perception is provided. In addition, for persons with specific color perception, a color information analysis function is also provided.
KEYWORDS: Data acquisition, LIDAR, Meteorology, Data modeling, Ozone, Atmospheric modeling, Data analysis, Cameras, Satellites, Magnetic resonance imaging
A web-based data acquisition and management system for GOSAT (Greenhouse gases Observation SATellite) validation lidar data analysis has been developed. The system consists of a data acquisition sub-system (DAS) and a data management sub-system (DMS). DAS, written in Perl, acquires AMeDAS (Automated Meteorological Data Acquisition System) ground-level local meteorological data, GPS radiosonde upper-air meteorological data, ground-level oxidant data, skyradiometer data, skyview camera images, meteorological satellite IR image data, and GOSAT validation lidar data. DMS, written in PHP, presents the satellite-pass dates and all acquired data. In this article, we briefly describe some improvements for higher performance and higher data usability. DAS now automatically calculates molecular number density profiles from the GPS radiosonde upper-air meteorological data and the U.S. Standard Atmosphere model. Predicted ozone density profile images above Saga city are also calculated using the Meteorological Research Institute (MRI) chemistry-climate model version 2 for comparison with actual ozone DIAL data.
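The molecular number density calculation follows from the ideal gas law n = p/(k_B T), with pressure and temperature taken from the standard atmosphere where radiosonde data are unavailable. A minimal sketch for the tropospheric segment of the U.S. Standard Atmosphere:

```python
K_B = 1.380649e-23                      # Boltzmann constant, J/K
G0, M, R = 9.80665, 0.0289644, 8.31446  # gravity (m/s^2), molar mass of air (kg/mol), gas constant
T0, P0, L = 288.15, 101325.0, 0.0065    # sea-level T (K), p (Pa), lapse rate (K/m)

def number_density(h_m):
    """Molecular number density (m^-3) for 0 <= h < 11 km from the
    U.S. Standard Atmosphere barometric formula and n = p / (k_B T)."""
    t = T0 - L * h_m
    p = P0 * (t / T0) ** (G0 * M / (R * L))
    return p / (K_B * t)
```

At sea level this gives roughly 2.55 x 10^25 molecules per cubic meter, the usual reference value for lidar molecular-scattering corrections.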
Greenhouse gases Observation SATellite (GOSAT) was launched to enable precise monitoring of the density of carbon dioxide by combining global observation data sent from space with data obtained on land and with simulation models. Observation of methane, another greenhouse gas, has also been considered. For validation of GOSAT data products, ground-based observation with a Fourier Transform Spectrometer (FTS), aerosol lidar, and ozone DIAL (DIfferential Absorption Lidar) at Saga University, Japan, has continued since March 2011. In this article, observation results obtained from the aerosol lidar are reported.
A web-based data acquisition and management system for GOSAT (Greenhouse gases Observation SATellite) validation lidar data analysis is developed. The system consists of a data acquisition sub-system (DAS) and a data management sub-system (DMS). DAS, written in Perl, acquires AMeDAS ground-level meteorological data, rawinsonde upper-air meteorological data, ground-level oxidant data, skyradiometer data, skyview camera images, meteorological satellite IR image data, and GOSAT validation lidar data. DMS, written in PHP, presents the satellite-pass dates and all acquired data.
KEYWORDS: Device simulation, Cameras, Cones, Rods, Luminous efficiency, Human vision and color perception, Medicine, Nerve agents, Retina, Imaging systems
HuVisCam, a human vision simulation camera that can simulate not only the Purkinje effect for mesopic and scotopic vision but also dark and light adaptation, and abnormal miosis and abnormal mydriasis caused by the influence of mydriatic medicine or a nerve agent, is developed. In this article, details of the system are described.
A new change detection method for remotely sensed images is proposed. The method can be applied to two images that have different numbers of spectral bands and/or different spectral ranges. It converts two multi-spectral, multi-temporal images into two sets of canonical variate images with limited correlation, called the canonical correlation. One or more canonical variate images that are most suitable for change detection are then selected, and changed regions in the original images are extracted using statistical modeling and statistical tests. In this paper, the details of the proposed method are described. Experiments using simulated multi-spectral, multi-temporal images based on spectral profiles in the ASTER Spectral Library are conducted to confirm change detection accuracy. The experimental results show reasonable changed regions and their change quantities.
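The canonical variates and correlations can be obtained from a QR factorization of each centered band set followed by an SVD of their cross-product. The sketch below is a generic CCA computation on synthetic data, not the authors' implementation:

```python
import numpy as np

def canonical_variates(X, Y):
    """Generic CCA: X (N, p) and Y (N, q) are pixels-by-bands matrices
    from the two dates. Returns the canonical correlations and the
    canonical variate 'images' (one column per canonical pair)."""
    Xc, Yc = X - X.mean(axis=0), Y - Y.mean(axis=0)
    Qx, _ = np.linalg.qr(Xc)              # orthonormal basis of each band set
    Qy, _ = np.linalg.qr(Yc)
    U_, s, Vt = np.linalg.svd(Qx.T @ Qy)  # s = canonical correlations
    return s, Qx @ U_, Qy @ Vt.T

# two synthetic 2-band "images" (flattened) sharing one latent signal
rng = np.random.default_rng(1)
z = rng.normal(size=500)
X = np.column_stack([z + 0.01 * rng.normal(size=500), rng.normal(size=500)])
Y = np.column_stack([z + 0.01 * rng.normal(size=500), rng.normal(size=500)])
corr, U, V = canonical_variates(X, Y)
```

The shared latent signal yields one near-unity canonical correlation, while the independent bands yield a near-zero one; change detection would then operate on the selected variate pair, with the paper's statistical test omitted here.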
KEYWORDS: Thermography, Data acquisition, Cameras, Image retrieval, Temperature metrology, Data conversion, Imaging devices, Data storage, Human-machine interfaces, Control systems
TZ-SCAN is a simple, low-cost thermal imaging device consisting of a single-point radiation thermometer on a tripod with a pan-tilt rotator, a DC motor controller board with a USB interface, and a laptop computer for rotator control, data acquisition, and data processing. TZ-SCAN acquires a series of zig-zag scanned data and stores them as a CSV file. A 2-D thermal distribution image can be retrieved using the second quefrency peak calculated from the TZ-SCAN data. An experiment is conducted to confirm the validity of the thermal retrieval algorithm; the result shows sufficient accuracy for 2-D thermal distribution image retrieval.
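The quefrency-peak idea comes from the cepstrum: the inverse FFT of the log power spectrum, whose peaks sit at the repetition period of a signal. A generic one-dimensional sketch follows; the signal and search bounds are illustrative, while the paper applies the idea to the 2-D zig-zag data:

```python
import numpy as np

def quefrency_peak(signal, lo=4, hi=None):
    """Dominant period (in samples) estimated from the cepstrum."""
    power = np.abs(np.fft.rfft(signal)) ** 2
    cepstrum = np.fft.irfft(np.log(power + 1e-12))
    hi = hi if hi is not None else len(cepstrum) // 2
    # skip the low-quefrency region dominated by the spectral envelope
    return lo + int(np.argmax(cepstrum[lo:hi]))

# harmonic signal repeating every 25 samples (20 full cycles)
n = np.arange(500)
sig = (np.sin(2 * np.pi * n / 25)
       + 0.5 * np.sin(4 * np.pi * n / 25)
       + 0.3 * np.sin(6 * np.pi * n / 25))
period = quefrency_peak(sig, hi=40)
```

For zig-zag scanned data, the scan-line length would presumably appear as such a quefrency peak, allowing the 1-D sample stream to be folded back into a 2-D image.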
"HYCLASS", a new hybrid classification method for remotely sensed multi-spectral images, is proposed. The method consists of two procedures: textural edge detection and texture classification. In the textural edge detection, the maximum likelihood classification (MLH) method is employed to find "the spectral edges", and morphological filtering is employed to process the spectral edges into "the textural edges" by sharpening the opened curve parts of the spectral edges. In the texture classification, the supervised texture classification method based on the normalized Zernike moment vector that the authors have already proposed is employed. Experiments using a simulated texture image and an actual airborne sensor image are conducted to evaluate the classification accuracy of HYCLASS. The experimental results show that HYCLASS provides reasonable classification results in comparison with those of the conventional classification method.
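The morphological processing of edge maps can be illustrated with a plain-NumPy binary closing (dilation followed by erosion), which fills small gaps in an edge curve. This is a generic sketch, not the exact filtering used in HYCLASS:

```python
import numpy as np

def dilate(img):
    """Binary dilation with a 3x3 square structuring element."""
    p = np.pad(img, 1)
    return np.max([p[i:i + img.shape[0], j:j + img.shape[1]]
                   for i in range(3) for j in range(3)], axis=0)

def erode(img):
    """Binary erosion with a 3x3 square element (pad with 1 so the
    image border is not eroded from outside)."""
    p = np.pad(img, 1, constant_values=1)
    return np.min([p[i:i + img.shape[0], j:j + img.shape[1]]
                   for i in range(3) for j in range(3)], axis=0)

# an edge curve with a one-pixel gap at column 3
edges = np.zeros((5, 7), dtype=int)
edges[2, [0, 1, 2, 4, 5, 6]] = 1
closed = erode(dilate(edges))   # morphological closing fills the gap
```

Closing open curve segments in this way is one standard means of turning fragmented spectral edges into continuous textural edges.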
At the previous conference, the authors proposed a new unsupervised texture classification method based on genetic algorithms (GA). In that method, the GA is employed to determine the location and size of the typical textures in the target image. The method consists of the following procedures: 1) the number of classification categories is determined; 2) each chromosome used in the GA consists of the coordinates of the center pixel of each training area candidate and its size; 3) 50 chromosomes are generated using random numbers; 4) the fitness of each chromosome is calculated as the product of the Classification Reliability in the Mixed Texture Cases (CRMTC) and the Stability of NZMV against Scanning Field of View Size (SNSFS); 5) in the selection operation, the elite preservation strategy is employed; 6) in the crossover operation, multi-point crossover is employed and two parent chromosomes are selected by the roulette strategy; 7) in the mutation operation, the loci where bit inversion occurs are decided by a mutation rate; 8) return to procedure 4. However, this method was not automated because it requires not only the target image but also the number of classification categories. In this paper, we describe some improvements toward automated texture classification. Experiments are conducted to evaluate the classification capability of the proposed method using images from Brodatz's photo album and an actual airborne multispectral scanner. The experimental results show that the proposed method can select appropriate texture samples and provide reasonable classification results.
A new unsupervised texture classification method based on genetic algorithms (GA) is proposed. In the method, the GA is employed to determine the location and size of the typical textures in the target image. The proposed method consists of the following procedures: (1) the number of classification categories is determined; (2) each chromosome used in the GA consists of the coordinates of the center pixel of each training area candidate and its size; (3) 50 chromosomes are generated using random numbers; (4) the fitness of each chromosome is calculated as the product of the Classification Reliability in the Mixed Texture Cases (CRMTC) and the Stability of NZMV against Scanning Field of View Size (SNSFS); (5) in the selection operation, the elite preservation strategy is employed; (6) in the crossover operation, multi-point crossover is employed and two parent chromosomes are selected by the roulette strategy; (7) in the mutation operation, the loci where bit inversion occurs are decided by a mutation rate; (8) return to procedure 4. Experiments are conducted to evaluate the classification capability of the proposed method using images from Brodatz's photo album and an actual airborne multispectral scanner. The experimental results show that the proposed method can select appropriate texture samples and provide reasonable classification results.
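The GA loop of steps (3) to (8), with elite preservation, roulette-wheel parent selection, multi-point crossover, and bit-inverting mutation, can be sketched generically. The fitness below is a toy one-max function standing in for the CRMTC x SNSFS product:

```python
import numpy as np

rng = np.random.default_rng(42)
N_POP, N_BITS, N_GEN, P_MUT = 50, 24, 60, 0.01

def fitness(pop):
    # toy stand-in for CRMTC * SNSFS: fraction of 1-bits ("one-max")
    return pop.mean(axis=1)

def roulette_pick(pop, fit):
    # roulette-wheel (fitness-proportional) selection of two parents
    p = fit / fit.sum()
    i, j = rng.choice(len(pop), size=2, p=p)
    return pop[i], pop[j]

def crossover(a, b):
    # two-point crossover (a simple case of multi-point crossover)
    i, j = sorted(rng.choice(np.arange(1, N_BITS), size=2, replace=False))
    child = a.copy()
    child[i:j] = b[i:j]
    return child

pop = rng.integers(0, 2, size=(N_POP, N_BITS))
history = []
for _ in range(N_GEN):
    fit = fitness(pop)
    history.append(fit.max())
    children = [pop[np.argmax(fit)].copy()]   # elite preservation
    while len(children) < N_POP:
        child = crossover(*roulette_pick(pop, fit))
        flip = rng.random(N_BITS) < P_MUT     # bit-inverting mutation loci
        children.append(np.where(flip, 1 - child, child))
    pop = np.array(children)

best = fitness(pop).max()
```

Elite preservation guarantees the best fitness never decreases across generations; in the actual method the chromosome would encode training-area center coordinates and sizes, and the fitness would be evaluated on the image.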
A new method for selecting appropriate training areas for supervised texture classification is proposed. In the method, genetic algorithms (GA) are employed to determine the appropriate location and size of each texture category's training area. The proposed method consists of the following procedures: 1) the number and kinds of classification categories are determined; 2) each chromosome used in the GA consists of the coordinates of the center pixel of each training area candidate and its size; 3) 50 chromosomes are generated using random numbers; 4) the fitness of each chromosome is calculated as the product of the Classification Reliability in the Mixed Texture Cases (CRMTC) and the Stability of NZMV against Scanning Field of View Size (SNSFS); 5) in the selection operation, the elite preservation strategy is employed; 6) in the crossover operation, multi-point crossover is employed and two parent chromosomes are selected by the roulette strategy; 7) in the mutation operation, the loci where bit inversion occurs are decided by a mutation rate; 8) return to procedure 4. Experiments are conducted to evaluate the proposed method's capability to search for appropriate training areas using images from Brodatz's photo album and their rotated versions. The experimental results show that the proposed method can select appropriate training areas much faster than the conventional trial-and-error method. The proposed method has also been applied to supervised texture classification of airborne multispectral scanner images, where it provides appropriate training areas for reasonable classification results.
An automated method that can select corresponding point candidates is developed. The method has three features: 1) the RIN-net is employed for corresponding point candidate selection; 2) multi-resolution analysis with the Haar wavelet transform is employed to improve selection accuracy and noise tolerance; 3) context information about corresponding point candidates is employed to screen the selected candidates. Here, 'RIN-net' denotes a back-propagation-trained feed-forward three-layer artificial neural network that takes rotation invariants as input data; in our system, pseudo-Zernike moments are employed as the rotation invariants. The RIN-net has an N x N pixel field of view (FOV). Experiments are conducted to evaluate the corresponding point candidate selection capability of the proposed method using various kinds of remotely sensed images. The experimental results show that the proposed method requires fewer training patterns and less training time, and achieves higher selection accuracy, than the conventional method.
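The rotation invariance that motivates feeding pseudo-Zernike moments to the RIN-net can be illustrated with plain complex moments, whose magnitudes are likewise unchanged by in-plane rotation. This is a simpler stand-in for illustration, not the pseudo-Zernike basis itself:

```python
import numpy as np

def complex_moment_magnitude(img, p, q):
    """|c_pq| where c_pq = sum f(x,y) z^p conj(z)^q, with z measured
    from the image centroid. Rotation by theta multiplies c_pq by
    exp(i(p-q)theta), so the magnitude is a rotation invariant."""
    ys, xs = np.mgrid[:img.shape[0], :img.shape[1]].astype(float)
    m = img.sum()
    cx, cy = (xs * img).sum() / m, (ys * img).sum() / m
    z = (xs - cx) + 1j * (ys - cy)
    return abs((img * z**p * np.conj(z)**q).sum())

rng = np.random.default_rng(7)
img = rng.random((8, 8))
inv_a = complex_moment_magnitude(img, 2, 1)
inv_b = complex_moment_magnitude(np.rot90(img), 2, 1)  # 90-degree rotation
```

Because the invariant is identical for the original and rotated patches, a network trained on such features need not see rotated copies of every training pattern, which is one reason fewer training patterns suffice.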
The lattice strain in excimer-laser-crystallized polycrystalline Si thin films reflects the grain growth induced by the laser irradiation. In this report, the lattice strain is measured using energy-dispersive grazing-incidence x-ray diffraction with synchrotron radiation. The excimer-laser-crystallized poly-Si thin films show tensile lattice strain in the directions parallel to the substrate surface. The strain increases from 2.2 x 10^-3 to 5.0 x 10^-3 as the grain size increases from 40 to 200 nm. When the grain size is small, the strain is anisotropic between the surface layer and the layer near the substrate interface. Carrier mobility in a thin film transistor tends to increase as the strain increases and the anisotropy decreases.
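For reference, strain values like those quoted above follow the usual definition from diffraction-measured lattice spacings; a minimal sketch with illustrative numbers, not values from the report:

```python
def lattice_strain(d_measured, d_unstrained):
    """Lattice strain epsilon = (d - d0) / d0 from XRD d-spacings."""
    return (d_measured - d_unstrained) / d_unstrained

# illustrative: a 0.5% increase in lattice spacing gives epsilon = 5e-3
eps = lattice_strain(1.005, 1.000)
```

A positive epsilon indicates tensile strain, as observed here in the directions parallel to the substrate surface.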