IWSLT.2022 - Papers

Total: 35

#1 SubER - A Metric for Automatic Evaluation of Subtitle Quality

Authors: Patrick Wilken ; Panayota Georgakopoulou ; Evgeny Matusov

This paper addresses the problem of evaluating the quality of automatically generated subtitles, which includes not only the quality of the machine-transcribed or translated speech, but also the quality of line segmentation and subtitle timing. We propose SubER, a single novel metric based on edit distance with shifts that takes all of these subtitle properties into account. We compare it to existing metrics for evaluating transcription, translation, and subtitle quality. A careful human evaluation in a post-editing scenario shows that the new metric correlates highly with post-editing effort and direct human assessment scores, outperforming baseline metrics that consider only the subtitle text, such as WER and BLEU, as well as existing methods for integrating segmentation and timing features.
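
For intuition only (the real SubER additionally models block shifts and uses subtitle timing; see the paper for details): segmentation quality can be folded into an ordinary edit distance by treating line breaks (<eol>) and subtitle-block breaks (<eob>) as regular tokens, as in this minimal Python sketch.

```python
# Illustrative sketch, not the SubER implementation: a WER-style edit
# distance that also counts segmentation errors by treating breaks as tokens.

def tokenize_subtitles(subtitles):
    """Flatten subtitles (each a list of lines) into tokens, inserting
    <eol> between lines and <eob> after each subtitle block."""
    tokens = []
    for block in subtitles:
        for i, line in enumerate(block):
            tokens.extend(line.split())
            if i < len(block) - 1:
                tokens.append("<eol>")
        tokens.append("<eob>")
    return tokens

def edit_distance(hyp, ref):
    """Plain Levenshtein distance over token sequences (one rolling row)."""
    d = list(range(len(ref) + 1))
    for i in range(1, len(hyp) + 1):
        prev, d[0] = d[0], i
        for j in range(1, len(ref) + 1):
            cost = prev + (hyp[i - 1] != ref[j - 1])   # substitution/match
            cur = min(d[j] + 1, d[j - 1] + 1, cost)    # deletion, insertion
            prev, d[j] = d[j], cur
    return d[-1]

def subtitle_error_rate(hyp_subs, ref_subs):
    hyp, ref = tokenize_subtitles(hyp_subs), tokenize_subtitles(ref_subs)
    return edit_distance(hyp, ref) / max(len(ref), 1)

ref = [["Hello there,", "how are you?"]]
hyp = [["Hello there, how", "are you?"]]   # same words, shifted line break
print(subtitle_error_rate(hyp, ref))       # ~0.29: the break shift costs 2 edits
```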

#2 Improving Arabic Diacritization by Learning to Diacritize and Translate

Authors: Brian Thompson ; Ali Alshehri

We propose a novel multitask learning method for diacritization which trains a model to both diacritize and translate. Our method addresses data sparsity by exploiting large, readily available bitext corpora. Furthermore, translation requires implicit linguistic and semantic knowledge, which is helpful for resolving ambiguities in diacritization. We apply our method to the Penn Arabic Treebank and report a new state-of-the-art word error rate of 4.79%. We also conduct manual and automatic analysis to better understand our method and highlight some of the remaining challenges in diacritization. Our method has applications in text-to-speech, speech-to-speech translation, and other NLP tasks.
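
As a rough illustration of the multitask pattern described here (not the authors' code): one shared encoder feeds a diacritization decoder and a translation decoder, and the two cross-entropy losses are summed. All module names, sizes, and the weight `lmbda` are assumptions, and attention masks are omitted for brevity.

```python
import torch.nn as nn

# Hypothetical sketch of a generic diacritize-and-translate multitask setup:
# a shared source encoder with two task decoders; masks omitted for brevity.

class DiacritizeAndTranslate(nn.Module):
    def __init__(self, vocab_src, vocab_diac, vocab_tgt, d_model=512):
        super().__init__()
        self.src_embed = nn.Embedding(vocab_src, d_model)
        self.encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True), 6)
        self.diac_embed = nn.Embedding(vocab_diac, d_model)
        self.tgt_embed = nn.Embedding(vocab_tgt, d_model)
        self.diac_decoder = nn.TransformerDecoder(
            nn.TransformerDecoderLayer(d_model, nhead=8, batch_first=True), 6)
        self.trans_decoder = nn.TransformerDecoder(
            nn.TransformerDecoderLayer(d_model, nhead=8, batch_first=True), 6)
        self.diac_out = nn.Linear(d_model, vocab_diac)
        self.trans_out = nn.Linear(d_model, vocab_tgt)

    def forward(self, src, diac_in, trans_in):
        memory = self.encoder(self.src_embed(src))   # shared representation
        diac = self.diac_out(self.diac_decoder(self.diac_embed(diac_in), memory))
        trans = self.trans_out(self.trans_decoder(self.tgt_embed(trans_in), memory))
        return diac, trans

def multitask_loss(diac_logits, diac_gold, trans_logits, trans_gold, lmbda=1.0):
    """Sum of the two task losses; lmbda is an assumed interpolation weight."""
    ce = nn.CrossEntropyLoss()
    return (ce(diac_logits.flatten(0, 1), diac_gold.flatten())
            + lmbda * ce(trans_logits.flatten(0, 1), trans_gold.flatten()))
```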

#3 Simultaneous Neural Machine Translation with Prefix Alignment

Authors: Yasumasa Kano ; Katsuhito Sudoh ; Satoshi Nakamura

Simultaneous translation is a task that requires starting translation before the speaker has finished speaking, which creates a trade-off between latency and accuracy. In this work, we focus on prefix-to-prefix translation and propose a method to extract alignments between bilingual prefix pairs. We use these alignments to segment a streaming input and fine-tune a translation model. In our experiments on the IWSLT simultaneous translation benchmark, the proposed method achieved higher BLEU scores than the baselines in low-latency ranges.

#4 Locality-Sensitive Hashing for Long Context Neural Machine Translation

Authors: Frithjof Petrick ; Jan Rosendahl ; Christian Herold ; Hermann Ney

After its introduction, the Transformer architecture quickly became the gold standard for neural machine translation. A major advantage of the Transformer over previous architectures is its faster training speed, achieved by complete parallelization across timesteps due to the use of attention instead of recurrent layers. However, this also leads to one of the Transformer's biggest problems: quadratic time and memory complexity with respect to the input length. In this work, we adapt the locality-sensitive hashing (LSH) approach of Kitaev et al. (2020) to self-attention in the Transformer, extend it to cross-attention, and apply this memory-efficient framework to sentence- and document-level machine translation. Our experiments show that the LSH attention scheme comes at the cost of slightly reduced translation quality at the sentence level. For document-level NMT, we are able to include much larger context sizes than is possible with the baseline Transformer. However, more context improves neither translation quality nor scores on targeted test suites.
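
As a rough sketch of the bucketing idea from Kitaev et al. (2020), not this paper's full implementation (which also covers cross-attention, sorting/chunking, and multi-round hashing): vectors are hashed with random rotations, and attention is computed only among tokens that share a bucket.

```python
import torch

# Illustrative bucketing sketch (angular LSH as in Kitaev et al., 2020):
# attention is restricted to tokens whose query/key vectors hash alike.

def lsh_buckets(x, n_buckets, seed=0):
    """Hash vectors x of shape (seq_len, d) into n_buckets angular buckets."""
    assert n_buckets % 2 == 0
    gen = torch.Generator().manual_seed(seed)
    rotations = torch.randn(x.size(-1), n_buckets // 2, generator=gen)
    rotated = x @ rotations                          # (seq_len, n_buckets/2)
    return torch.cat([rotated, -rotated], dim=-1).argmax(dim=-1)

def bucketed_attention(q, k, v, n_buckets=8):
    """Attend only within buckets: cost is quadratic per bucket, not overall."""
    buckets = lsh_buckets(k, n_buckets)              # shared-QK style hashing
    out = torch.zeros_like(v)
    for b in buckets.unique():
        idx = (buckets == b).nonzero(as_tuple=True)[0]
        scores = (q[idx] @ k[idx].T) / k.size(-1) ** 0.5
        out[idx] = scores.softmax(dim=-1) @ v[idx]
    return out

q = k = torch.randn(128, 64)   # Reformer-style shared query/key projections
v = torch.randn(128, 64)
print(bucketed_attention(q, k, v).shape)   # torch.Size([128, 64])
```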

#5 Anticipation-Free Training for Simultaneous Machine Translation

Authors: Chih-Chiang Chang ; Shun-Po Chuang ; Hung-yi Lee

Simultaneous machine translation (SimulMT) speeds up the translation process by starting to translate before the source sentence is completely available. The task is difficult due to limited context and word-order differences between languages. Existing methods increase latency or introduce adaptive read-write policies for SimulMT models to handle local reordering and improve translation quality. However, long-distance reordering can cause SimulMT models to learn translation incorrectly: the model may be forced to predict target tokens before the corresponding source tokens have been read, which leads to aggressive anticipation during inference and results in hallucination. To mitigate this problem, we propose a new framework that decomposes the translation process into a monotonic translation step and a reordering step, modeling the latter with an auxiliary sorting network (ASN). The ASN rearranges the hidden states to match the target-language order, so that the SimulMT model can learn to translate more reasonably. The entire model is optimized end-to-end and does not rely on external aligners or data. During inference, the ASN is removed to achieve streaming. Experiments show that the proposed framework outperforms previous methods at lower latency.

#6 Who Are We Talking About? Handling Person Names in Speech Translation

Authors: Marco Gaido ; Matteo Negri ; Marco Turchi

Recent work has shown that systems for speech translation (ST) – much like automatic speech recognition (ASR) systems – handle person names poorly. This shortcoming not only leads to errors that can seriously distort the meaning of the input, but also hinders the adoption of such systems in application scenarios (like computer-assisted interpreting) where the translation of named entities, such as person names, is crucial. In this paper, we first analyse the outputs of ASR/ST systems to identify the reasons for failures in person name transcription/translation. Besides frequency in the training data, we pinpoint the nationality of the referred person as a key factor. We then mitigate the problem by creating multilingual models, and further improve our ST systems by forcing them to jointly generate transcripts and translations, prioritising the former over the latter. Overall, our solutions yield an average relative improvement of 47.8% in token-level person name accuracy across three language pairs (en->es, fr, it).

#7 Joint Generation of Captions and Subtitles with Dual Decoding

Authors: Jitao Xu ; François Buet ; Josep Crego ; Elise Bertin-Lemée ; François Yvon

As the amount of audio-visual content increases, developing automatic captioning and subtitling solutions that match the expectations of a growing international audience appears to be the only viable way to boost throughput and lower the related post-production costs. Automatic captioning and subtitling often need to be tightly intertwined to achieve an appropriate level of consistency and synchronization with each other and with the video signal. In this work, we assess a dual decoding scheme to achieve a strong coupling between these two tasks and show how adequacy and consistency are increased, with virtually no additional cost in terms of model size and training complexity.

#8 MirrorAlign: A Super Lightweight Unsupervised Word Alignment Model via Cross-Lingual Contrastive Learning

Authors: Di Wu ; Liang Ding ; Shuo Yang ; Mingyang Li

Word alignment is essential for downstream cross-lingual language understanding and generation tasks. Recently, neural word alignment models have surpassed statistical models in performance, but they rely heavily on sophisticated translation models. In this study, we propose a super-lightweight unsupervised word alignment model named MirrorAlign, which introduces bidirectional symmetric attention trained with a contrastive learning objective and employs an agreement loss to bind the attention maps, so that the alignments follow the mirror-like symmetry hypothesis. Experimental results on several public benchmarks demonstrate that our model achieves competitive, if not better, performance compared to the state of the art in word alignment while significantly reducing training and decoding time on average. Further ablation analyses and case studies show the superiority of our proposed MirrorAlign. Notably, we regard our model as a pioneering attempt to unify bilingual word embedding and word alignment. Encouragingly, our approach achieves a 16.4x speedup over GIZA++ and 50x parameter compression compared with Transformer-based alignment methods. We release our code to facilitate the community: https://github.com/moore3930/MirrorAlign.
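
A hedged sketch of the agreement idea (the full MirrorAlign objective is contrastive and operates on learned embeddings; this shows only the mirror-symmetry constraint): the source-to-target and target-to-source soft-alignment maps are penalized for disagreeing.

```python
import torch

# Hedged sketch: tie the two directional soft-alignment matrices with an
# agreement penalty so alignments become mirror-symmetric. The actual
# MirrorAlign objective additionally uses contrastive learning.

def directional_attention(a, b, temperature=0.1):
    """Soft alignment of each token in a over tokens in b; rows sum to 1."""
    return (a @ b.T / temperature).softmax(dim=-1)

def agreement_loss(src_emb, tgt_emb):
    fwd = directional_attention(src_emb, tgt_emb)    # (S, T)
    bwd = directional_attention(tgt_emb, src_emb)    # (T, S)
    return ((fwd - bwd.T) ** 2).mean()               # mirror-symmetry penalty

src = torch.randn(7, 256, requires_grad=True)        # source token embeddings
tgt = torch.randn(9, 256, requires_grad=True)        # target token embeddings
agreement_loss(src, tgt).backward()                  # pulls maps into agreement
```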

#9 On the Impact of Noises in Crowd-Sourced Data for Speech Translation

Authors: Siqi Ouyang ; Rong Ye ; Lei Li

Training speech translation (ST) models requires large and high-quality datasets. MuST-C is one of the most widely used ST benchmark datasets. It contains around 400 hours of speech-transcript-translation data for each of eight translation directions. The dataset passed several quality-control filters during creation. However, we find that MuST-C still suffers from three major quality issues: audio-text misalignment, inaccurate translation, and unnecessary speaker names. What is the impact of these data quality issues on model development and evaluation? In this paper, we propose an automatic method to fix or filter out the above quality issues, using English-German (En-De) translation as an example. Our experiments show that ST models perform better on clean test sets, and that the ranking of proposed models remains consistent across different test sets. Moreover, simply removing misaligned data points from the training set does not lead to a better ST model.

#10 Findings of the IWSLT 2022 Evaluation Campaign

Authors: Antonios Anastasopoulos ; Loïc Barrault ; Luisa Bentivogli ; Marcely Zanon Boito ; Ondřej Bojar ; Roldano Cattoni ; Anna Currey ; Georgiana Dinu ; Kevin Duh ; Maha Elbayad ; Clara Emmanuel ; Yannick Estève ; Marcello Federico ; Christian Federmann ; Souhir Gahbiche ; Hongyu Gong ; Roman Grundkiewicz ; Barry Haddow ; Benjamin Hsu ; Dávid Javorský ; Vĕra Kloudová ; Surafel Lakew ; Xutai Ma ; Prashant Mathur ; Paul McNamee ; Kenton Murray ; Maria Nǎdejde ; Satoshi Nakamura ; Matteo Negri ; Jan Niehues ; Xing Niu ; John Ortega ; Juan Pino ; Elizabeth Salesky ; Jiatong Shi ; Matthias Sperber ; Sebastian Stüker ; Katsuhito Sudoh ; Marco Turchi ; Yogesh Virkar ; Alexander Waibel ; Changhan Wang ; Shinji Watanabe

The evaluation campaign of the 19th International Conference on Spoken Language Translation featured eight shared tasks: (i) Simultaneous speech translation, (ii) Offline speech translation, (iii) Speech to speech translation, (iv) Low-resource speech translation, (v) Multilingual speech translation, (vi) Dialect speech translation, (vii) Formality control for speech translation, (viii) Isometric speech translation. A total of 27 teams participated in at least one of the shared tasks. This paper details, for each shared task, the purpose of the task, the data that were released, the evaluation metrics that were applied, the submissions that were received and the results that were achieved.

#11 The YiTrans Speech Translation System for IWSLT 2022 Offline Shared Task

Authors: Ziqiang Zhang ; Junyi Ao

This paper describes the submission of our end-to-end YiTrans speech translation system for the IWSLT 2022 offline task, which translates English audio into German, Chinese, and Japanese. The YiTrans system is built on large-scale pre-trained encoder-decoder models. More specifically, we first design a multi-stage pre-training strategy to build a multi-modality model with a large amount of labeled and unlabeled data. We then fine-tune the corresponding components of the model for the downstream speech translation tasks. Moreover, we make various efforts to improve performance, such as data filtering, data augmentation, speech segmentation, model ensembling, and so on. Experimental results show that our YiTrans system obtains significant improvements over the strong baseline in all three translation directions, and achieves a +5.2 BLEU improvement over last year's best end-to-end system on tst2021 English-German.

#12 Amazon Alexa AI's System for IWSLT 2022 Offline Speech Translation Shared Task

Authors: Akshaya Shanbhogue ; Ran Xue ; Ching-Yun Chang ; Sarah Campbell

This paper describes Amazon Alexa AI's submission to the IWSLT 2022 Offline Speech Translation Task. Our system is an end-to-end speech translation model that leverages pretrained models and cross-modality transfer learning. We detail two improvements to the knowledge-transfer schema. First, we implement a new loss function that effectively reduces the knowledge gap between the audio and text modalities in the translation task. Second, we investigate multiple fine-tuning strategies, including sampling loss, language grouping, and domain adaptation. These strategies aim to bridge the gaps between the speech and text translation tasks. We also implement a multi-stage segmentation and merging strategy that yields improvements on the unsegmented development datasets. Results show that the proposed loss function consistently improves BLEU scores on the development datasets for both English-German and multilingual models. Additionally, certain language pairs see BLEU score improvements with specific fine-tuning strategies.
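
The abstract does not spell out the proposed loss, so the following is only a generic example of a modality-gap term under assumed design choices: pull the mean-pooled speech representation toward a detached text-encoder representation of the transcript, weighted by an assumed coefficient `alpha`.

```python
import torch.nn.functional as F

# Generic (assumed) modality-gap term: match the speech encoder's pooled
# utterance vector to a detached text-encoder vector of the transcript.

def modality_gap_loss(speech_states, text_states):
    """MSE between mean-pooled speech and text encoder outputs (B, T, d)."""
    speech_vec = speech_states.mean(dim=1)           # (batch, d)
    text_vec = text_states.mean(dim=1).detach()      # teacher side, no gradient
    return F.mse_loss(speech_vec, text_vec)

def total_loss(translation_nll, speech_states, text_states, alpha=0.5):
    # alpha is an assumed interpolation weight, not a value from the paper
    return translation_nll + alpha * modality_gap_loss(speech_states, text_states)
```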

#13 Efficient yet Competitive Speech Translation: FBK@IWSLT2022

Authors: Marco Gaido ; Sara Papi ; Dennis Fucci ; Giuseppe Fiameni ; Matteo Negri ; Marco Turchi

The primary goal of FBK's submission to the IWSLT 2022 offline and simultaneous speech translation tasks is to reduce model training costs without sacrificing translation quality. As such, we first question the need for ASR pre-training, showing that it is not essential to achieve competitive results. Second, we focus on data filtering, showing that a simple method that looks at the ratio between source and target characters yields a quality improvement of 1 BLEU. Third, we compare different methods to reduce the detrimental effect of the mismatch between training data, which is manually segmented at the sentence level, and inference data, which is automatically segmented. Towards the same goal of training cost reduction, we participate in the simultaneous task with the same model trained for offline ST. The effectiveness of our lightweight training strategy is shown by the high score obtained on the MuST-C en-de corpus (26.7 BLEU) and is confirmed in high-resource data conditions by a 1.6 BLEU improvement over last year's winning system on the IWSLT 2020 test set.
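
The character-ratio filter mentioned above is simple enough to sketch directly; the bounds below are illustrative assumptions, not the values used by FBK.

```python
# Minimal sketch of a source/target character-ratio filter; the bounds
# below are illustrative assumptions, not the values used by FBK.

def char_ratio_filter(pairs, lo=0.5, hi=2.0):
    """Keep (src, tgt) pairs whose len(src)/len(tgt) lies within [lo, hi]."""
    kept = []
    for src, tgt in pairs:
        if not src or not tgt:
            continue                       # drop empty sides outright
        if lo <= len(src) / len(tgt) <= hi:
            kept.append((src, tgt))
    return kept

data = [("Hello world", "Hallo Welt"),
        ("Hi", "Dies ist eine sehr lange falsche Übersetzung")]
print(char_ratio_filter(data))             # only the first pair survives
```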

#14 Effective combination of pretrained models - KIT@IWSLT2022

Authors: Ngoc-Quan Pham ; Tuan Nam Nguyen ; Thai-Binh Nguyen ; Danni Liu ; Carlos Mullov ; Jan Niehues ; Alexander Waibel

Pretrained models in the acoustic and textual modalities can potentially improve speech translation for both cascade and end-to-end approaches. In this evaluation, we look for an empirical answer by using the wav2vec, mBART50 and DeltaLM models to improve our text and speech translation models. The experiments show that these models, together with an advanced audio segmentation method, improve over our previous end-to-end system by up to 7 BLEU points. More importantly, the experiments show that, given enough data and modeling capacity to overcome the training difficulty, we can outperform even very competitive cascade systems. In our experiments, this gap can be as large as 2.0 BLEU points, the same margin by which cascade systems have often led over the years.

#15 The USTC-NELSLIP Offline Speech Translation Systems for IWSLT 2022

Authors: Weitai Zhang ; Zhongyi Ye ; Haitao Tang ; Xiaoxi Li ; Xinyuan Zhou ; Jing Yang ; Jianwei Cui ; Pan Deng ; Mohan Shi ; Yifan Song ; Dan Liu ; Junhua Liu ; Lirong Dai

This paper describes USTC-NELSLIP's submissions to the IWSLT 2022 Offline Speech Translation task, including speech translation of talks from English to German, English to Chinese and English to Japanese. We describe both cascaded architectures and end-to-end models which can directly translate source speech into target text. In the cascaded condition, we investigate the effectiveness of different model architectures with robust training and achieve a 2.72 BLEU improvement over last year's best system on the MuST-C English-German test set. In the end-to-end condition, we build models based on the Transformer and Conformer architectures, achieving a 2.26 BLEU improvement over last year's best end-to-end system. The end-to-end system has obtained promising results, but it is still lagging behind our cascaded models.

#16 The AISP-SJTU Simultaneous Translation System for IWSLT 2022

Authors: Qinpei Zhu ; Renshou Wu ; Guangfeng Liu ; Xinyu Zhu ; Xingyu Chen ; Yang Zhou ; Qingliang Miao ; Rui Wang ; Kai Yu

This paper describes AISP-SJTU's submissions for the IWSLT 2022 Simultaneous Translation task. We participate in the text-to-text and speech-to-text simultaneous translation tracks from English to Mandarin Chinese. We improve the training of the CAAT (Cross Attention Augmented Transducer) model by training across multiple values of the right-context window size, which achieves good online performance without fixing a right-context window size prior to training. For the speech-to-text task, our best submitted model achieves 25.87, 26.21 and 26.45 BLEU in the low, medium and high latency regimes on tst-COMMON, corresponding to 27.94, 28.31 and 28.43 BLEU in the text-to-text task.

#17 The Xiaomi Text-to-Text Simultaneous Speech Translation System for IWSLT 2022

Authors: Bao Guo ; Mengge Liu ; Wen Zhang ; Hexuan Chen ; Chang Mu ; Xiang Li ; Jianwei Cui ; Bin Wang ; Yuhang Guo

This system paper describes the Xiaomi Translation System for the IWSLT 2022 Simultaneous Speech Translation (SST) shared task. We participate in the English-to-Mandarin Chinese text-to-text (T2T) track. Our system is built on the Transformer model with novel techniques borrowed from our recent research work. For data filtering, language-model-based and rule-based methods are used to obtain high-quality bilingual parallel corpora. We also strengthen our system with established data augmentation techniques, such as knowledge distillation, tagged back-translation, and iterative back-translation, and incorporate training techniques such as R-drop, deep models, and large-batch training, which have been shown to benefit the vanilla Transformer model. In the SST scenario, several variations of wait-k strategies are explored. Furthermore, in terms of robustness, both data-based and model-based methods are used to reduce the sensitivity of our system to automatic speech recognition (ASR) outputs. We finally design some inference algorithms and use an adaptive ensemble method based on multiple model variants to further improve the performance of the system. Compared with strong baselines, fusing all these techniques improves our system by 2~3 BLEU under different latency regimes.
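
For reference, a generic wait-k schedule (Ma et al., 2019), which several of this year's systems build on, reads k source tokens before the first write and then alternates reads and writes; `write_token` below is a hypothetical stand-in for the actual translation model.

```python
# Generic wait-k decoding schedule (Ma et al., 2019), shown for reference;
# `write_token` is a hypothetical stand-in for the actual SimulMT model.

def wait_k_policy(source_stream, k, write_token):
    """Yield (action, payload) pairs: k initial READs, then alternate READ/WRITE."""
    read, written = [], 0
    for token in source_stream:
        read.append(token)
        yield ("READ", token)
        if len(read) >= k + written:     # enough context: emit one target token
            written += 1
            yield ("WRITE", write_token(read, written))
    while True:                          # source exhausted: flush the rest
        written += 1
        tok = write_token(read, written)
        yield ("WRITE", tok)
        if tok == "</s>":
            break

# Toy run with an echoing stand-in "model":
stream = iter(["we", "must", "act", "now", "</s>"])
echo = lambda ctx, i: ctx[i - 1] if i <= len(ctx) else "</s>"
for decision in wait_k_policy(stream, k=2, write_token=echo):
    print(decision)
```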

#18 NVIDIA NeMo Offline Speech Translation Systems for IWSLT 2022

Authors: Oleksii Hrinchuk ; Vahid Noroozi ; Abhinav Khattar ; Anton Peganov ; Sandeep Subramanian ; Somshubra Majumdar ; Oleksii Kuchaiev

This paper provides an overview of NVIDIA NeMo's speech translation systems for the IWSLT 2022 Offline Speech Translation Task. Our cascade system consists of 1) a Conformer RNN-T automatic speech recognition model, 2) a punctuation-capitalization model based on a pre-trained T5 encoder, and 3) an ensemble of Transformer neural machine translation models fine-tuned on TED talks. Our end-to-end model has fewer parameters and consists of a Conformer encoder and a Transformer decoder. It builds on the cascade system by re-using its pre-trained ASR encoder and training on synthetic translations generated with the ensemble of NMT models. Our En->De cascade and end-to-end systems achieve 29.7 and 26.2 BLEU on the 2020 test set respectively, both outperforming the previous year's best of 26 BLEU.
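
To make the three-stage flow concrete, here is a hedged sketch in which `asr`, `punctuate` and the NMT models are hypothetical callables; averaging per-model log-probabilities over pooled n-best lists is an assumed, common way to realize the ensemble step, not necessarily NeMo's.

```python
# Hedged sketch of the three-stage cascade above; `asr`, `punctuate` and the
# NMT models are hypothetical callables with illustrative interfaces.

def cascade_translate(audio, asr, punctuate, nmt_models, n_best=5):
    transcript = asr(audio)              # 1) Conformer RNN-T speech recognition
    text = punctuate(transcript)         # 2) punctuation and capitalization
    candidates = set()
    for model in nmt_models:             # 3) Transformer NMT ensemble
        candidates.update(model.n_best(text, n_best))
    def ensemble_score(hyp):
        return sum(m.logprob(text, hyp) for m in nmt_models) / len(nmt_models)
    return max(candidates, key=ensemble_score)
```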

#19 The NiuTrans's Submission to the IWSLT22 English-to-Chinese Offline Speech Translation Task

Authors: Yuhao Zhang ; Canan Huang ; Chen Xu ; Xiaoqian Liu ; Bei Li ; Anxiang Ma ; Tong Xiao ; Jingbo Zhu

This paper describes NiuTrans's submission to the IWSLT22 English-to-Chinese (En-Zh) offline speech translation task. Our end-to-end bilingual system is built with constrained English and Chinese data and translates English speech into Chinese text without intermediate transcription. Our speech translation models combine different pre-trained acoustic models and machine translation models through two kinds of adapters. We compare the effect of standard speech features (e.g., log Mel-filterbank) and pre-trained speech features, and try to make them interact. The final submission is an ensemble of three potential speech translation models. Our single best and ensemble models achieve 18.66 and 19.35 BLEU respectively on the MuST-C En-Zh tst-COMMON set.

#20 The HW-TSC's Offline Speech Translation System for IWSLT 2022 Evaluation

Authors: Yinglu Li ; Minghan Wang ; Jiaxin Guo ; Xiaosong Qiao ; Yuxia Wang ; Daimeng Wei ; Chang Su ; Yimeng Chen ; Min Zhang ; Shimin Tao ; Hao Yang ; Ying Qin

This paper describes HW-TSC's Offline Speech Translation System submitted to the IWSLT 2022 evaluation. We explored both cascade and end-to-end systems on three language tracks (en-de, en-zh and en-ja), and chose the cascade system as our primary submission. For the automatic speech recognition (ASR) component of the cascade system, three ASR models (Conformer, S2T-Transformer and U2) are trained on a mixture of five datasets. During inference, transcripts are generated with the help of a domain-controlled generation strategy. Context-aware reranking and an ensemble-based anti-interference strategy are proposed to produce better ASR outputs. For the machine translation component, we pretrained three translation models on the WMT21 dataset and fine-tuned them on in-domain corpora. Our cascade system shows competitive performance compared with known offline systems in industry and academia.

#21 The HW-TSC's Simultaneous Speech Translation System for IWSLT 2022 Evaluation

Authors: Minghan Wang ; Jiaxin Guo ; Yinglu Li ; Xiaosong Qiao ; Yuxia Wang ; Zongyao Li ; Chang Su ; Yimeng Chen ; Min Zhang ; Shimin Tao ; Hao Yang ; Ying Qin

This paper presents our work in the IWSLT 2022 simultaneous speech translation evaluation. For the text-to-text (T2T) track, we participate in three language pairs and build wait-k based simultaneous MT (SimulMT) models for the task. The models were pretrained on the WMT21 news corpora and further improved with in-domain fine-tuning and self-training. For the speech-to-text (S2T) track, we designed both cascade and end-to-end systems for three language pairs. The cascade system is composed of a chunking-based streaming ASR model and the SimulMT model used in the T2T track. The end-to-end system is a simultaneous speech translation (SimulST) model based on the wait-k strategy, trained directly on a synthetic corpus produced by translating all texts of the ASR corpora into the specific target language with an offline MT model. It also contains a heuristic sentence-breaking strategy, preventing it from finishing the translation before the end of the speech. We evaluate our systems on the MuST-C tst-COMMON dataset and show that the end-to-end system is competitive with the cascade one. Meanwhile, we also demonstrate that the SimulMT model can be efficiently optimized by these approaches, resulting in improvements of 1-2 BLEU points.

#22 MLLP-VRAIN UPV systems for the IWSLT 2022 Simultaneous Speech Translation and Speech-to-Speech Translation tasks

Authors: Javier Iranzo-Sánchez ; Javier Jorge Cano ; Alejandro Pérez-González-de-Martos ; Adrián Giménez Pastor ; Gonçal Garcés Díaz-Munío ; Pau Baquero-Arnal ; Joan Albert Silvestre-Cerdà ; Jorge Civera Saiz ; Albert Sanchis ; Alfons Juan

This work describes the participation of the MLLP-VRAIN research group in the two shared tasks of the IWSLT 2022 conference: Simultaneous Speech Translation and Speech-to-Speech Translation. We present our streaming-ready ASR, MT and TTS systems for Speech Translation and Synthesis from English into German. Our submission combines these systems in a cascade approach, paying special attention to data preparation and decoding for streaming inference.

#23 Pretrained Speech Encoders and Efficient Fine-tuning Methods for Speech Translation: UPC at IWSLT 2022

Authors: Ioannis Tsiamas ; Gerard I. Gállego ; Carlos Escolano ; José Fonollosa ; Marta R. Costa-jussà

This paper describes the submissions of the UPC Machine Translation group to the IWSLT 2022 Offline Speech Translation and Speech-to-Speech Translation tracks. The offline task involves translating English speech to German, Japanese and Chinese text. Our Speech Translation systems are trained end-to-end and are based on large pretrained speech and text models. We use an efficient fine-tuning technique that trains only specific layers of our system, and explore the use of adapter modules for the non-trainable layers. We further investigate the suitability of different speech encoders (wav2vec 2.0, HuBERT) for our models and the impact of knowledge distillation from the Machine Translation model that we use for the decoder (mBART). For segmenting the IWSLT test sets, we fine-tune a pretrained audio segmentation model and achieve improvements of 5 BLEU compared to the given segmentation. Our best single model uses HuBERT and parallel adapters and achieves 29.42 BLEU on English-German MuST-C tst-COMMON and 26.77 on the IWSLT 2020 test set. By ensembling several models, we further increase translation quality to 30.83 and 27.78 BLEU, respectively. Furthermore, our English-Japanese submission achieves 15.85 BLEU and our English-Chinese submission 25.63 BLEU on the MuST-C tst-COMMON sets. Finally, we extend our system to perform English-German Speech-to-Speech Translation with a pretrained Text-to-Speech model.
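
As a hedged illustration of the parallel-adapter idea mentioned above (sizes and placement are assumptions, not UPC's exact configuration): a small trainable bottleneck runs alongside a frozen pretrained layer, and its output is added to that layer's output.

```python
import torch
import torch.nn as nn

# Hedged illustration of a parallel adapter: a small trainable bottleneck
# runs alongside a frozen pretrained layer and is added to its output.
# Sizes and placement are assumptions, not UPC's exact configuration.

class ParallelAdapter(nn.Module):
    def __init__(self, frozen_layer, d_model=1024, bottleneck=64):
        super().__init__()
        self.layer = frozen_layer
        for p in self.layer.parameters():
            p.requires_grad = False          # only the adapter is trained
        self.down = nn.Linear(d_model, bottleneck)
        self.up = nn.Linear(bottleneck, d_model)
        nn.init.zeros_(self.up.weight)       # adapter starts as a no-op
        nn.init.zeros_(self.up.bias)

    def forward(self, x):
        return self.layer(x) + self.up(torch.relu(self.down(x)))

block = ParallelAdapter(nn.Linear(1024, 1024))   # stand-in for one encoder layer
print(block(torch.randn(2, 50, 1024)).shape)     # torch.Size([2, 50, 1024])
```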

#24 CUNI-KIT System for Simultaneous Speech Translation Task at IWSLT 2022

Authors: Peter Polák ; Ngoc-Quan Pham ; Tuan Nam Nguyen ; Danni Liu ; Carlos Mullov ; Jan Niehues ; Ondřej Bojar ; Alexander Waibel

In this paper, we describe our submission to the Simultaneous Speech Translation task at IWSLT 2022. We explore strategies to utilize an offline model in a simultaneous setting without modifying the original model. In our experiments, we show that our onlinization algorithm is almost on par with the offline setting while being 3x faster than the offline setting in terms of latency on the test set. We also show that the onlinized offline model outperforms the best IWSLT 2021 simultaneous system in the medium and high latency regimes and is almost on par in the low latency regime. We make our system publicly available.

#25 NAIST Simultaneous Speech-to-Text Translation System for IWSLT 2022

Authors: Ryo Fukuda ; Yuka Ko ; Yasumasa Kano ; Kosuke Doi ; Hirotaka Tokuyama ; Sakriani Sakti ; Katsuhito Sudoh ; Satoshi Nakamura

This paper describes NAIST's simultaneous speech translation systems developed for the IWSLT 2022 Evaluation Campaign. We participated in the speech-to-text track for English-to-German and English-to-Japanese. Our primary submissions were end-to-end systems using adaptive segmentation policies based on Prefix Alignment.