When a model begins a new task, the challenge known as "catastrophic forgetting" limits the scalability of deep learning networks: the network quickly loses the capabilities it has already learned. The fine-tuning approach retains the original feature extractor to extract features for the new task and thereby learn the new classes. However, this method degrades performance on previously learned tasks, because the shared parameters change without any new guidance for the original task-specific prediction parameters. This paper proposes a general fine-tuning method to reduce catastrophic forgetting in sequential task learning scenarios. The key idea is to fine-tune the parameters in every layer, rather than only the last layer as in traditional fine-tuning. Experimental results show that the new method outperforms standard fine-tuning in accuracy on the old tasks and outperforms EWC in performance on the new tasks. A distinct advantage is that old tasks do not limit the performance of new tasks but instead provide some support for them.
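As an illustrative sketch only (not the authors' implementation), the contrast between last-layer fine-tuning and per-layer fine-tuning can be expressed in a few lines of PyTorch. The network architecture, layer names, and learning rates below are assumptions chosen for brevity.

```python
# Sketch: last-layer fine-tuning vs. fine-tuning every layer.
# The model and all hyperparameters are hypothetical.
import torch
import torch.nn as nn

model = nn.Sequential(            # assumed toy backbone + prediction head
    nn.Linear(784, 256), nn.ReLU(),
    nn.Linear(256, 128), nn.ReLU(),
    nn.Linear(128, 10),           # task-specific prediction layer
)

# Traditional fine-tuning: freeze the shared feature extractor and
# update only the last (task-specific) layer.
for p in model[:-1].parameters():
    p.requires_grad = False
last_layer_opt = torch.optim.SGD(model[-1].parameters(), lr=1e-2)

# Per-layer fine-tuning, in the spirit described above: every layer stays
# trainable, here with smaller learning rates for earlier layers so the
# shared features adapt gradually rather than being overwritten.
for p in model.parameters():
    p.requires_grad = True
param_groups = [
    {"params": model[0].parameters(), "lr": 1e-4},
    {"params": model[2].parameters(), "lr": 1e-3},
    {"params": model[4].parameters(), "lr": 1e-2},
]
all_layer_opt = torch.optim.SGD(param_groups)
```

Either optimizer can then be used in an ordinary training loop on the new task; the per-layer variant is the one whose effect on old-task accuracy the paper evaluates.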