Abstract:
An overview of deep metric learning methods is presented. Although these methods appeared in recent years, they were compared only with their predecessors, using neural networks of now-obsolete architectures to learn the embeddings on which the metric is computed. The methods described here were compared on datasets from several domains, using pre-trained neural networks with performance comparable to the state of the art (SotA): ConvNeXt for images and DistilBERT for texts. Labeled datasets were split into train and test parts so that the classes did not overlap (i.e., all objects of each class belong entirely to either train or test). Such a large-scale fair comparison was performed for the first time and led to unexpected conclusions: some “old” methods, such as Tuplet Margin Loss, outperform both their modern modifications and methods proposed in very recent works.
Keywords: machine learning, deep learning, metric, similarity.
UDC: 519.7
Presented by: A. A. Shananin
Received: 30.06.2023. Revised: 19.09.2023. Accepted: 15.10.2023.