Unlocking Parameter-Efficient Fine-Tuning for Low-Resource Language Translation

Published in the 2024 Annual Conference of the North American Chapter of the Association for Computational Linguistics (NAACL 2024)

We are still preparing the camera-ready version of this paper. The complete text will be available by June at the latest.

Abstract

Parameter-efficient fine-tuning (PEFT) methods are increasingly vital for adapting large-scale pre-trained language models to diverse tasks, offering a balance between adaptability and computational efficiency. They are particularly important in Low-Resource Language (LRL) Neural Machine Translation (NMT), where they can enhance translation accuracy with minimal resources. However, their practical effectiveness varies significantly across languages. We conducted comprehensive empirical experiments across varying LRL domains and dataset sizes to evaluate the performance of 8 PEFT methods, spanning 15 architectures in total, using the SacreBLEU score. We show that the Houlsby+Inversion adapter outperforms the baseline, demonstrating the effectiveness of PEFT methods.
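As a concrete illustration, below is a minimal sketch of how a Houlsby-style bottleneck adapter combined with invertible (embedding-level) adapters can be attached to a pre-trained seq2seq model using the AdapterHub adapter-transformers library. The checkpoint, adapter name, and reduction factor are illustrative placeholders, not the exact configuration used in the paper.

```python
# Minimal sketch (not the paper's exact setup): attach a Houlsby-style
# bottleneck adapter plus invertible adapters to a pre-trained seq2seq model
# with the AdapterHub `adapter-transformers` library, then train only the
# adapter weights while the base model stays frozen.
from transformers import AutoModelForSeq2SeqLM
from transformers.adapters import HoulsbyInvConfig

# Placeholder multilingual checkpoint; the paper's base model may differ.
model = AutoModelForSeq2SeqLM.from_pretrained("facebook/mbart-large-50")

# Houlsby config = bottleneck adapters after both the attention and FFN
# sublayers; the "Inv" variant adds invertible adapters at the embedding layer.
config = HoulsbyInvConfig(reduction_factor=16)  # bottleneck size is an illustrative choice

model.add_adapter("lrl_nmt", config=config)  # "lrl_nmt" is an arbitrary adapter name
model.train_adapter("lrl_nmt")               # freeze base weights, train only the adapter
model.set_active_adapters("lrl_nmt")         # route the forward pass through the adapter
```

Only the adapter parameters are updated during fine-tuning, which is what keeps the per-language training cost low relative to full fine-tuning.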

Contribution

  • Spearheaded empirical experiments assessing the efficacy of parameter-efficient fine-tuning (PEFT) methods across 15 architectures in LRL NMT.
  • Demonstrated the superior performance of the Houlsby+Inversion adapter in LRL NMT, showcasing the effectiveness of PEFT methods in improving translation accuracy (as measured by SacreBLEU; a scoring sketch follows below) while minimizing computational resources.
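
Translation quality in these comparisons is reported in SacreBLEU. The following is a minimal scoring sketch using the sacrebleu Python package; the hypothesis and reference strings are placeholders, not data from the paper.

```python
# Minimal sketch: corpus-level SacreBLEU scoring with the `sacrebleu` package.
# Hypotheses and references below are placeholders, not data from the paper.
import sacrebleu

hypotheses = [
    "the model translation of sentence one",
    "the model translation of sentence two",
]
# One inner list per reference set, aligned with the hypotheses.
references = [[
    "the reference translation of sentence one",
    "the reference translation of sentence two",
]]

bleu = sacrebleu.corpus_bleu(hypotheses, references)
print(f"SacreBLEU: {bleu.score:.2f}")
```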

Download paper here