IWSLT 23 Code
Winning submission in the IWSLT 2023 low-resource speech translation track
About
NAVER LABS Europe submitted a single multilingual speech translation system that achieved top performance in the IWSLT 2023 low-resource track for the Tamasheq–French and Quechua–Spanish language pairs. Our model leverages pre-trained speech and text models and is implemented on top of the pasero toolkit.
Link to resources
Citing us
To cite this work, please use the BibTeX entry below:
@inproceedings{gow-smith-etal-2023-naver,
    title = "{NAVER} {LABS} {E}urope{'}s Multilingual Speech Translation Systems for the {IWSLT} 2023 Low-Resource Track",
    author = "Gow-Smith, Edward and
      Berard, Alexandre and
      Boito, Marcely Zanon and
      Calapodescu, Ioan",
    editor = "Salesky, Elizabeth and
      Federico, Marcello and
      Carpuat, Marine",
    booktitle = "Proceedings of the 20th International Conference on Spoken Language Translation (IWSLT)",
    month = jul,
    year = "2023",
    address = "Toronto, Canada (in-person and online)",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2023.iwslt-1.10.pdf",
    doi = "10.18653/v1/2023.iwslt-1.10",
    pages = "144--158"
}