This post summarizes the continual-learning paper "Towards reusable network components by learning compatible representations" (arXiv 2020).
Abstract
This paper makes a first step towards compatible and hence reusable network components. It splits a network into two components: a feature extractor and a target task head. The approach is validated on three applications: unsupervised domain adaptation, transferring classifiers across feature extractors with different architectures, and increasing the computational efficiency of transfer learning (i.e., the three tasks of domain adaptation, classifier transferability, and efficient transfer learning).
Introduction
We believe that a general way to achieve network reusability is to build a large library of compatible components which are specialized for different tasks;
We make a first step in this direction by devising a training procedure that makes the feature representations learnt on different tasks compatible, without any post-hoc fine-tuning;
The compatibility of components saves the designer the effort to make them work together in a new combination, so they are free to focus on designing ever more complex models.
We say two networks are compatible if we can recombine the feature extractor of one network with the task head of the other while still producing good predictions, directly without any fine-tuning after recombination;
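The recombination idea above can be illustrated with a toy sketch (hypothetical class and variable names, not the paper's actual code): each network is a feature extractor followed by a task head, and compatibility means an extractor from one network can feed a head from another directly, with no fine-tuning after recombination.

```python
# Toy illustration of recombining network components (hypothetical names;
# real networks would be deep models, e.g. a CNN backbone and a classifier).

class FeatureExtractor:
    """Maps raw input to a feature vector."""
    def __init__(self, name, scale):
        self.name = name
        self.scale = scale  # stand-in for learned backbone weights

    def extract(self, x):
        # Toy computation standing in for a deep backbone's forward pass.
        return [v * self.scale for v in x]

class TaskHead:
    """Maps a feature vector to a prediction."""
    def __init__(self, name, weights):
        self.name = name
        self.weights = weights  # stand-in for learned head weights

    def predict(self, features):
        # Toy linear scoring standing in for a classifier head.
        return sum(w * f for w, f in zip(self.weights, features))

def recombine(extractor, head):
    """Compose an extractor with a head trained elsewhere, no fine-tuning."""
    return lambda x: head.predict(extractor.extract(x))

# Extractor trained on task A, head trained on task B; if their feature
# representations are compatible, the recombined model predicts well directly.
model = recombine(FeatureExtractor("extractor_A", 2.0),
                  TaskHead("head_B", [0.5, 0.5]))
print(model([1.0, 3.0]))  # -> 4.0
```

The point of the sketch is purely structural: compatibility is a property of the feature representations, so `recombine` involves no retraining step.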
Method
Sec. 3 introduces three ways to alter the training procedure of neural networks to encourage compatibility, along with the definition and discussion of compatibility.
Conclusion
We have demonstrated that we can train networks to produce compatible features, without compromising accuracy on the original tasks.
Key points: the paper proposes a new training procedure for learning a set of highly compatible network components; each new task then only requires recombining these components. The whole paper is organized around how compatibility is defined, in both the method and the experimental sections.