COVID-19 continues to cause infections and deaths worldwide. To detect COVID-19 quickly and efficiently, this paper proposes a detection framework based on reinforcement learning for COVID-19 diagnosis. We use validation-set accuracy as the reward value and obtain the initial model for the next epoch by selecting the model with the maximum reward in each epoch. We also propose a prediction framework that integrates multiple detection frameworks through parameter sharing to predict the progression of patients' disease. We experimented on our own dataset, screened by professional physicians, and obtained strong results; in external validation, we still achieved high accuracy without additional training. The experimental results show that our classification accuracy reaches 96.81%, and the precision, sensitivity, specificity, and AUC (area under the curve) are 95.47%, 98.64%, 95.91%, and 0.9698, respectively. External validation accuracies reach 93.04% and 90.85%, and the accuracy of our prediction framework is 91.04%. Extensive experiments demonstrate that our proposed method is effective and robust for COVID-19 detection and prediction.
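The epoch-wise selection rule described above (validation accuracy as the reward, max-reward model carried forward as the next epoch's initial model) can be sketched as follows. This is a minimal illustration, not the paper's implementation: `train_one_epoch` and `validation_accuracy` are hypothetical stand-ins, with the model reduced to a list of weights.

```python
import random

def train_one_epoch(model, seed):
    # Hypothetical stand-in for one training epoch: perturb the weights.
    rng = random.Random(seed)
    return [w + rng.uniform(-0.1, 0.1) for w in model]

def validation_accuracy(model):
    # Hypothetical reward: a validation-accuracy proxy in [0, 1],
    # higher when the weights are closer to 1.0.
    return 1.0 - min(1.0, sum(abs(w - 1.0) for w in model) / len(model))

def search_best_model(init_model, epochs=5, candidates=4):
    """Each epoch, score candidate models by validation accuracy (the
    reward) and carry the max-reward model forward as the next epoch's
    initial model, as the abstract describes."""
    model = init_model
    for epoch in range(epochs):
        pool = [model] + [train_one_epoch(model, seed=epoch * candidates + i)
                          for i in range(candidates)]
        rewards = [validation_accuracy(m) for m in pool]
        model = pool[rewards.index(max(rewards))]  # max-reward model wins
    return model, validation_accuracy(model)
```

Because the current model stays in the candidate pool, the reward of the selected model never decreases across epochs in this sketch.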
In recent years, deep convolutional features have been deployed in discriminative correlation filters (DCF) to boost object tracking performance. However, these features come from networks pre-trained for image classification, not object tracking. In this paper, we find that different convolutional feature channels play different roles when tracking different targets: some channels are favorable for tracking a given target and can be identified from that target, some are irrelevant to it, and some are the primary cause of tracker performance degradation. We therefore perform feature selection before learning correlation filters, realizing the selection module with reinforcement learning. We penalize unfavorable feature channels toward non-positive weights to obtain a DCF tracker built on the positive convolutional feature channels. Compared with DCF-based trackers without feature selection, our scheme improves the robustness of the target representation, reduces the dimensionality of the activations, and achieves better tracking performance. Extensive experiments on the OTB dataset demonstrate that our feature selection scheme is simple, robust, and effective for DCF-based trackers.
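The channel-selection idea above (keep only channels favorable to the current target, drop the rest before learning the correlation filter) can be sketched as follows. This is an illustrative simplification, not the paper's RL module: the per-channel score here is a hypothetical correlation between each channel's activation map and the desired target response.

```python
import numpy as np

def channel_scores(features, target_response):
    """Hypothetical per-channel score: normalized correlation between each
    feature channel (shape C x H x W) and the desired target response
    (shape H x W). Positive scores mark favorable channels; non-positive
    scores mark irrelevant or harmful ones."""
    t = target_response.ravel() - target_response.mean()
    scores = np.empty(features.shape[0])
    for i in range(features.shape[0]):
        f = features[i].ravel() - features[i].mean()
        denom = np.linalg.norm(f) * np.linalg.norm(t)
        scores[i] = float(f @ t / denom) if denom > 0 else 0.0
    return scores

def select_positive_channels(features, target_response):
    # Keep only positively scored channels, shrinking the feature
    # dimension before correlation-filter learning.
    keep = channel_scores(features, target_response) > 0
    return features[keep], keep
```

Dropping non-positive channels both reduces the activation dimensionality and removes the channels most likely to degrade the learned filter, mirroring the two benefits the abstract claims.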