In an era of immense data generation, unlocking the full potential of Machine Learning (ML) hinges on overcoming the limitations posed by the scarcity of labeled data. In Computer Vision (CV) research, algorithm design must account for this constraint and focus instead on the abundance of unlabeled imagery. In recent years, there has been a notable trend within the community toward Self-Supervised Learning (SSL) methods that can leverage this untapped data pool. ML practice promotes self-supervised pre-training for generalized feature extraction on a diverse unlabeled dataset, followed by supervised transfer learning on a smaller set of labeled, application-specific images. This shift in learning methods raises questions about the importance of pre-training data composition for optimizing downstream performance. We evaluate models trained with varying degrees of similarity between the pre-training and transfer learning data compositions. Our findings indicate that front-end embeddings generalize learned image features well regardless of pre-training data composition, leaving transfer learning to inject the majority of application-specific understanding into the model. Composition may therefore be largely irrelevant in self-supervised pre-training, suggesting that target data is a primary driver of application specificity. Thus, pre-training deep learning models with application-specific data, which is often difficult to acquire, is not necessary for reaching competitive downstream performance. The ability to pre-train on more accessible datasets invites greater flexibility in practical deep learning.
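For illustration, the sketch below shows the pre-train-then-transfer workflow described above in PyTorch: a backbone pre-trained with SSL on unlabeled data is adapted to a small labeled, application-specific dataset via supervised fine-tuning. The checkpoint path, class count, and hyperparameters are hypothetical placeholders, not the setup evaluated in this work.

```python
# Minimal sketch of supervised transfer learning from an SSL-pretrained backbone.
# Assumes self-supervised pre-training has already produced a checkpoint file
# (hypothetical path "ssl_pretrained_resnet50.pth"); this is not the paper's exact setup.
import torch
import torch.nn as nn
from torchvision import models

# Load the SSL-pretrained backbone weights; strict=False tolerates missing or
# extra keys such as projection-head parameters left over from pre-training.
backbone = models.resnet50()
state = torch.load("ssl_pretrained_resnet50.pth")  # assumed checkpoint path
backbone.load_state_dict(state, strict=False)

# Replace the classification head for the downstream task (e.g., 10 classes).
num_classes = 10
backbone.fc = nn.Linear(backbone.fc.in_features, num_classes)

# Supervised transfer learning: fine-tune all parameters (alternatively, freeze
# the backbone and train only the new head) on the small labeled dataset.
optimizer = torch.optim.SGD(backbone.parameters(), lr=1e-3, momentum=0.9)
criterion = nn.CrossEntropyLoss()

def fine_tune(model, loader, epochs=10):
    model.train()
    for _ in range(epochs):
        for images, labels in loader:
            optimizer.zero_grad()
            loss = criterion(model(images), labels)
            loss.backward()
            optimizer.step()
```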