This post gives reading notes on [MoCo] Momentum Contrast for Unsupervised Visual Representation Learning.
1、Purpose
Unsupervised representation learning is already highly successful in natural language processing, because language tasks have discrete signal spaces (words, sub-word units, etc.), which makes it easy to build tokenized dictionaries; the raw signal of images, by contrast, is continuous and high-dimensional, so building such dictionaries is harder
Existing unsupervised visual representation learning methods can be viewed as building dynamic dictionaries: the "keys" of the dictionary are sampled from data (images or patches) and are represented by an encoder network
The dictionary being built needs to satisfy two conditions: it should be large, and it should be consistent as it evolves during training
2、Method
Momentum Contrast (MoCo)
1)contrastive learning
dictionary look-up
-> loss: InfoNCE
-> momentum
the dictionary is dynamic: the keys are randomly sampled, and the key encoder evolves during training
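As a reference, the contrastive loss used in the paper is InfoNCE: for a query q with one positive key k_+ and K negative keys drawn from the dictionary, with temperature τ,

```latex
\mathcal{L}_q = -\log \frac{\exp(q \cdot k_+ / \tau)}{\sum_{i=0}^{K} \exp(q \cdot k_i / \tau)}
```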
2)dictionary as a queue
-> large: decouple the dictionary size (can be set as a hyper-parameter) from the mini-batch size
-> consistent: the encoded representations of the current mini-batch are enqueued, and the oldest are dequeued.
the dictionary keys come from the preceding several mini-batches, so the key encoder has to progress slowly; it is maintained as a momentum-based moving average of the query encoder (a sketch of the queue update follows)
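A minimal sketch of the enqueue/dequeue step, assuming a fixed-size (K, dim) queue tensor and a write pointer (names and shapes are illustrative, not the paper's exact implementation):

```python
import torch

def dequeue_and_enqueue(queue, keys, ptr):
    """Replace the oldest entries in the key queue with the current mini-batch keys.

    queue : (K, dim) tensor holding encoded keys from preceding mini-batches
    keys  : (batch, dim) keys produced by the key (momentum) encoder
    ptr   : current write position; K is assumed to be divisible by batch
    """
    batch = keys.shape[0]
    K = queue.shape[0]
    queue[ptr:ptr + batch] = keys.detach()  # stored keys carry no gradient
    ptr = (ptr + batch) % K                 # advance the pointer, wrapping around
    return queue, ptr
```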
3)momentum update
-> only the parameters of the query encoder are updated by back-propagation; the key encoder is updated by the momentum rule
-> although the keys in different mini-batches are encoded by different versions of the key encoder, the differences between these versions are small
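The update rule from the paper is θ_k ← m·θ_k + (1−m)·θ_q with a large momentum coefficient (m = 0.999 by default). A minimal sketch, assuming both encoders share the same architecture:

```python
import torch

@torch.no_grad()
def momentum_update(encoder_q, encoder_k, m=0.999):
    # theta_k <- m * theta_k + (1 - m) * theta_q ; the paper's default is m = 0.999
    for p_q, p_k in zip(encoder_q.parameters(), encoder_k.parameters()):
        p_k.data.mul_(m).add_(p_q.data, alpha=1.0 - m)
```

A large m keeps the key encoder changing slowly, which is what keeps the queued keys consistent with each other.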
4)pretext task
instance discrimination: a query matches a key if they are encoded views (e.g. different crops) of the same image
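To make this concrete, a sketch of how a positive pair can be formed from one image (the exact augmentations here are an assumption; the paper also uses color jitter and random grayscale):

```python
import torchvision.transforms as T

augment = T.Compose([
    T.RandomResizedCrop(224),
    T.RandomHorizontalFlip(),
    T.ToTensor(),
])

def make_positive_pair(img):
    # Two independently augmented views of the same image: one is encoded as the
    # query, the other as the positive key; views of other images serve as negatives.
    return augment(img), augment(img)
```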
5)shuffling BN
perform BN on the samples independently for each GPU; for the key encoder, shuffle the sample order of the mini-batch before distributing it among GPUs (and shuffle it back after encoding), so that the batch statistics for a query and its positive key come from different subsets, preventing intra-batch communication among samples from leaking information
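A single-process sketch that only simulates the idea (the real implementation shuffles across GPUs with distributed communication; here per-GPU BN is imitated by encoding the shuffled batch in separate chunks, and the helper name is hypothetical):

```python
import torch

def encode_keys_with_shuffled_bn(encoder_k, images_k, num_chunks=4):
    n = images_k.shape[0]
    idx_shuffle = torch.randperm(n)              # shuffle the sample order for the key encoder
    idx_unshuffle = torch.argsort(idx_shuffle)   # inverse permutation to undo the shuffle
    shuffled = images_k[idx_shuffle]
    # Each chunk stands in for one GPU: its BN statistics are computed on a
    # random subset, so a query and its positive key do not share statistics.
    keys = torch.cat([encoder_k(chunk) for chunk in shuffled.chunk(num_chunks)], dim=0)
    return keys[idx_unshuffle]                   # restore the original order to align with the queries
```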