Unlike object tracking on the ground, underwater object tracking is challenging due to image attenuation and distortion. The difficulty is compounded by the high-freedom motion of underwater targets: rotation, scale change, and occlusion significantly degrade the performance of many tracking methods. To address these problems, this paper proposes a multi-scale underwater object tracking method based on adaptive feature fusion. Gray, HOG (Histogram of Oriented Gradients), and CN (Color Names) features are adaptively fused within the background-aware correlation filter (BACF) framework. Moreover, a novel scale estimation method and a high-confidence model update strategy are proposed to handle scale changes and background noise. Experimental results show a success rate of 64.1% under the AUC criterion, outperforming the classic BACF and other methods, especially in challenging conditions.
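The abstract does not specify the fusion rule, so the following is a minimal sketch of one common way to fuse per-feature correlation responses adaptively: weighting each feature's response map by its peak-to-sidelobe ratio (PSR), a standard confidence measure for correlation-filter trackers. The function names and the 11×11 sidelobe exclusion window are illustrative assumptions, not the paper's exact method.

```python
import numpy as np

def psr(response):
    """Peak-to-sidelobe ratio: a common confidence measure for a
    correlation-filter response map (higher = more reliable)."""
    peak = response.max()
    py, px = np.unravel_index(response.argmax(), response.shape)
    mask = np.ones_like(response, dtype=bool)
    mask[max(0, py - 5):py + 6, max(0, px - 5):px + 6] = False  # exclude peak area
    sidelobe = response[mask]
    return (peak - sidelobe.mean()) / (sidelobe.std() + 1e-8)

def fuse_responses(responses):
    """Adaptively fuse per-feature response maps (e.g., gray/HOG/CN)
    by weighting each map with its normalized PSR confidence."""
    weights = np.array([psr(r) for r in responses])
    weights /= weights.sum()
    return sum(w * r for w, r in zip(weights, responses))

# Example: three hypothetical response maps, standing in for BACF
# filters trained on gray, HOG, and CN features respectively.
rng = np.random.default_rng(0)
maps = [rng.random((50, 50)) for _ in range(3)]
fused = fuse_responses(maps)
new_pos = np.unravel_index(fused.argmax(), fused.shape)
```

A feature whose response map is sharply peaked (high PSR) thus dominates localization in the frames where it is reliable, which is one way the "adaptive" behavior described above can be realized.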
Medical imaging, used for both diagnosis and therapy planning, is evolving towards multi-modality acquisition protocols. Manual segmentation of 3D images is a tedious task, prone to intra- and inter-expert variability, while automatic segmentation that exploits the characteristics of multi-modal images remains a difficult problem. Positron emission tomography (PET) and computed tomography (CT) are widely used together: PET offers high contrast but often blurry tumor edges because of its limited spatial resolution, whereas CT offers high resolution but low contrast between a tumor and its surrounding normal soft tissue. Tumor segmentation from a single PET or CT image alone is therefore difficult. Co-segmentation methods that exploit the complementary information between PET and CT are known to improve segmentation accuracy. This complementary information can be either consistent or inconsistent at the image level, and correctly localizing tumor edges under inconsistent information is a major challenge for co-segmentation. To solve this problem, a novel joint level set model is proposed that combines PET and CT evidence in a unified energy functional, achieving co-segmentation across the two modalities. Convergence of the co-segmentation model corresponds to the optimal trade-off between PET and CT. The different characteristics of the two imaging modalities are accounted for by an adaptive convergence process that relies mostly on the PET evidence at the start, to constrain the tumor location, and mostly on the CT evidence at the end, to delineate boundary details. This adaptivity is realized automatically by stepwise moderating the joint weights during convergence. The proposed model is validated on 20 non-small cell lung tumor PET-CT images, achieving an average Dice similarity coefficient (DSC) of 0.846±0.064 and a positive predictive value (PPV) of 0.889±0.079, demonstrating its high accuracy for lung tumor co-segmentation in PET-CT images.
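The abstract describes a unified energy with stepwise-moderated joint weights but gives no formula; the following is one plausible shape for such an energy, with all notation assumed rather than taken from the paper.

```latex
% A plausible form of the unified energy (notation assumed; not the
% authors' exact formulation).  \phi is the level set function shared
% by both modalities, E_PET and E_CT are data terms on the PET and CT
% intensities, and the last term is the usual contour-length regularizer.
E(\phi; t) \;=\; w(t)\, E_{\mathrm{PET}}(\phi)
          \;+\; \bigl(1 - w(t)\bigr)\, E_{\mathrm{CT}}(\phi)
          \;+\; \mu \int_{\Omega} \bigl|\nabla H(\phi)\bigr| \, d\mathbf{x}
```

Here the joint weight $w(t)$ would be stepwise decreased from near 1 toward 0 over the iterations $t$, so that PET evidence dominates early (constraining tumor location) and CT evidence dominates late (refining boundary details), matching the adaptive convergence behavior described above.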
The Drosophila visual system is extremely sensitive to moving targets, which provides a wealth of biological inspiration for research on target motion perception in complex scenes and lays a biological foundation for building artificial Drosophila visual neural networks. Drosophila vision has been studied extensively in physiology, anatomy, and behavior, but our understanding of its underlying neural computation is still insufficient. To gain insight into the neural mechanisms of Drosophila vision and to better exploit its strengths in motion perception, we propose a Drosophila vision-inspired model that constructs a complete visual motion perception system by integrating successive computational layers. Our hybrid model fully reproduces the motion perception process of Drosophila vision. In addition, the model can be applied to salient object detection in dynamic scenes. This salient object detection model differs from previous ones in that it accurately identifies the motion of interest (MOI) while suppressing background disturbances and ego-motion. Comprehensive evaluations on standard benchmarks demonstrate the superiority of our model over state-of-the-art salient object detection methods.
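For readers unfamiliar with fly motion vision, the canonical computational building block behind such models is the Hassenstein-Reichardt elementary motion detector (EMD), which correlates a delayed photoreceptor signal with the undelayed signal of a neighboring photoreceptor. The sketch below implements that classic correlator; it is background on the underlying principle, not the paper's full multi-layer model, and the delay/offset values are illustrative.

```python
import numpy as np

def hassenstein_reichardt(frames, tau=2, dx=1):
    """Minimal Hassenstein-Reichardt EMD: `frames` is a (T, H, W)
    luminance sequence; the output is a direction-selective motion
    signal per pixel.  tau = temporal delay in frames, dx = spatial
    offset in pixels (both illustrative)."""
    delayed = np.roll(frames, tau, axis=0)        # temporally delayed copy
    shifted = np.roll(frames, -dx, axis=2)        # spatially offset copy
    shifted_delayed = np.roll(delayed, -dx, axis=2)
    # Correlate the delayed signal from one photoreceptor with the
    # undelayed signal of its neighbor, in two mirror-symmetric arms;
    # their difference is direction-selective.
    return delayed * shifted - frames * shifted_delayed

# Example: a bright 2-pixel bar drifting rightward over a dark
# background yields a positive mean response.
T, H, W = 20, 16, 32
frames = np.zeros((T, H, W))
for t in range(T):
    frames[t, :, t:t + 2] = 1.0
motion = hassenstein_reichardt(frames)[5:]        # skip temporal wrap-around
print(motion.mean() > 0)                          # True: rightward motion
```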
For vibration signals measured at different speeds or loads, the same bearing state can exhibit considerable internal variability, which further increases the difficulty of extracting consistent fault features. We believe that improving feature learning and classification requires a more comprehensive and extensive extraction and fusion of the signal. However, existing multiscale multi-stream architectures rely on concatenating features at the deepest layers, which stacks multiscale features by brute force and does not achieve complete fusion. This paper proposes a novel multiscale shared learning network (MSSLN) architecture to extract and classify the fault features inherent in the multiscale factors of the vibration signal. The merits of the proposed MSSLN are twofold: 1) a multi-stream architecture learns and fuses multiscale features from the raw signals in parallel; 2) the shared learning architecture fully exploits the representation shared consistently across multiscale factors. These two characteristics help MSSLN produce a more faithful diagnosis than existing single-scale and multiscale methods. Extensive experiments on the Case Western Reserve University bearing dataset demonstrate that the proposed method achieves high accuracy and excellent generalization.
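To make the shared-learning idea concrete, here is a minimal PyTorch sketch of its core mechanism: the raw 1-D vibration signal is downsampled to several scales, and every stream passes through the *same* weight-shared encoder so that a scale-consistent representation is learned before fusion. The class name, layer sizes, and scale factors are illustrative assumptions, not the paper's exact MSSLN, whose fusion scheme is likely richer than the simple concatenation shown here.

```python
import torch
import torch.nn as nn

class MSSLNSketch(nn.Module):
    """Illustrative sketch of multiscale shared learning (hypothetical
    architecture, not the paper's exact network)."""

    def __init__(self, n_classes=10, scales=(1, 2, 4)):
        super().__init__()
        self.scales = scales
        # Shared encoder: identical weights applied to every scale,
        # enforcing a consistent representation across scales.
        self.encoder = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=64, stride=8, padding=28),
            nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),    # length-invariant pooling
        )
        self.classifier = nn.Linear(32 * len(scales), n_classes)

    def forward(self, x):               # x: (batch, 1, signal_length)
        feats = []
        for s in self.scales:
            xs = nn.functional.avg_pool1d(x, s) if s > 1 else x
            feats.append(self.encoder(xs).flatten(1))
        return self.classifier(torch.cat(feats, dim=1))

# Example: a batch of 8 vibration segments, 2048 samples each.
model = MSSLNSketch()
logits = model(torch.randn(8, 1, 2048))
print(logits.shape)                     # torch.Size([8, 10])
```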