This paper presents a semantic scene modeling technique for constructing a cloud-based aquaculture surveillance system using an autonomous drone. The emergence of low-cost drones has created opportunities for new solutions to a number of problems in computer vision and the artificial-intelligence-based internet of things (AIoT). However, vision-based activity detection with a mobile RGB camera remains a challenging task, since the activities in different regions of the monitored scene differ considerably and the objects detected from a drone are often very small. In this work, the 3D model of an aquaculture environment is first constructed from the calibrated intrinsic camera parameters, the depth maps, and the pose parameters of the frames of a drone-captured video. Next, our semantic scene modeling algorithm represents the visual and geometric information of the semantic objects, which define the checkpoints for routine data gathering and environmental inspection. To associate each checkpoint with the drone's GPS signal and altitude value, our approach combines automatic drone navigation, computer vision, and machine learning algorithms to detect checkpoint-specific activities. The scene modeling algorithm transfers the essential knowledge to the mobile drone through the aquaculture cloud for daily monitoring of the fish, persons, nets, and feeding systems at an aquaculture site. The drone thus becomes a flyable intelligent robot that helps the manager of an aquaculture site automatically collect valuable data, which further decision-making algorithms can use to optimize fish production. Experiments show that our approach attains high semantics-based activity recognition accuracy without sacrificing operation speed.
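The abstract describes associating each semantic checkpoint with the drone's GPS signal and altitude. A minimal sketch of that association, assuming a hypothetical `Checkpoint` record and a simple nearest-fix lookup by great-circle distance (the paper's actual data structures and matching procedure are not given here):

```python
import math
from dataclasses import dataclass

# Hypothetical checkpoint record: a semantic label paired with the drone's
# GPS fix and altitude, as the abstract describes. Labels are illustrative.
@dataclass
class Checkpoint:
    label: str         # e.g. "net", "feeder" (assumed example labels)
    lat: float         # GPS latitude in degrees
    lon: float         # GPS longitude in degrees
    altitude_m: float  # drone altitude in meters

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two GPS fixes."""
    r = 6371000.0  # mean Earth radius in meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def nearest_checkpoint(checkpoints, lat, lon):
    """Return the stored checkpoint closest to the drone's current GPS fix."""
    return min(checkpoints, key=lambda c: haversine_m(c.lat, c.lon, lat, lon))
```

During a routine flight, the drone's current fix would be matched against the stored checkpoints with `nearest_checkpoint`, and the returned label would select the checkpoint-specific activity detector to run.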
This paper presents a novel deep-learning approach for analyzing fish feeding intensity from images of fish tanks during the feeding process. The grade of feeding intensity is an important indicator of fish appetite; in the design of a smart feeding system for aquaculture, this information is of great significance for guiding feeding and optimizing fish production. However, conventional fish appetite assessment methods are inefficient and subjective. To solve these problems, this study proposes a deep learning approach for grading fish feeding intensity, based on a spatiotemporal two-stream 3D CNN, to evaluate fish appetite. The approach proceeds as follows. First, a fixed RGB camera is set up to capture videos of the fish tanks during the feeding processes; these videos form a dataset for training the two-stream neural network. Next, fish appetite levels are graded using the trained model. Finally, the performance of the method is evaluated and compared with other CNN-based deep learning approaches. The results show that the grading accuracy reached 91.18%, which outperforms the compared CNN-based approaches. Thus, the model can be used to detect and evaluate fish appetite to guide production practices.
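A two-stream network typically fuses a spatial (RGB) stream and a temporal (optical-flow) stream at the score level. The sketch below shows only that late-fusion step, assuming each stream has already produced class logits; the four grade names are a common feeding-intensity scale assumed here for illustration, not taken from the paper:

```python
import math

# Assumed four-level feeding-intensity scale (illustrative, not from the paper).
GRADES = ["none", "weak", "medium", "strong"]

def softmax(logits):
    """Numerically stable softmax over a list of logits."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def fuse_and_grade(rgb_logits, flow_logits):
    """Average the per-stream softmax scores (late fusion) and
    return the feeding-intensity grade with the highest fused score."""
    p_rgb = softmax(rgb_logits)
    p_flow = softmax(flow_logits)
    fused = [(a + b) / 2 for a, b in zip(p_rgb, p_flow)]
    return GRADES[max(range(len(fused)), key=fused.__getitem__)]
```

Averaging softmax scores is one simple fusion choice; weighted averaging or a learned fusion layer are common alternatives when one stream is more reliable than the other.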