Abstract:
Recent developments in natural language processing have brought remarkable progress on the code-text retrieval problem. As the Transformer-based models used for this task continue to grow in size, the computational cost and time required for end-to-end fine-tuning become substantial. This poses a significant challenge for adapting and deploying these models when computational resources are limited. Motivated by these concerns, we propose a fine-tuning framework that leverages parameter-efficient fine-tuning (PEFT) techniques. Moreover, we adopt contrastive learning objectives to improve the quality of the bimodal representations learned by Transformer-based models. We also provide extensive benchmarking of PEFT methods, the lack of which has been highlighted as a crucial gap in the literature. Through extensive experiments with the CodeT5+ model on two datasets, we demonstrate that the proposed fine-tuning framework has the potential to improve code-text retrieval performance while tuning at most 0.4% of the model's parameters.
Key words and phrases: Code retrieval, PEFT, CodeT5+, contrastive learning, NLP.