ICASSP 2020 ESPnet-TTS Audio Samples

https://arxiv.org/abs/1910.10909

Abstract

This paper introduces a new end-to-end text-to-speech (E2E-TTS) toolkit named ESPnet-TTS, which is an extension of the open-source speech processing toolkit ESPnet. The toolkit supports state-of-the-art E2E-TTS models, including Tacotron 2, Transformer TTS, and FastSpeech, and also provides recipes inspired by the Kaldi automatic speech recognition (ASR) toolkit. The recipes follow a design unified with the ESPnet ASR recipes, providing high reproducibility. The toolkit also provides pre-trained models and samples for all of the recipes so that users can use them as baselines. Furthermore, the unified design enables the integration of ASR functions with TTS, e.g., ASR-based objective evaluation and semi-supervised learning with both ASR and TTS models. This paper describes the design of the toolkit and an experimental evaluation in comparison with other toolkits. The experimental results show that our best model outperforms the other toolkits, achieving a mean opinion score (MOS) of 4.25 on the LJSpeech dataset. The toolkit is available on GitHub.

Audio samples (English)

These samples are used in the subjective evaluation.
You can find all of the samples in our Google Drive.
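For reference, a mean opinion score (MOS) such as the 4.25 reported in the abstract is simply the average of listeners' 1-to-5 ratings, usually reported with a 95% confidence interval. The sketch below illustrates the computation with hypothetical ratings; it is not the actual evaluation data or scoring script.

```python
import math

def mean_opinion_score(ratings):
    """Compute the MOS and a 95% confidence interval from 1-5 listener ratings."""
    n = len(ratings)
    mos = sum(ratings) / n
    # Sample variance (n - 1 denominator) and the normal-approximation CI.
    var = sum((r - mos) ** 2 for r in ratings) / (n - 1)
    ci = 1.96 * math.sqrt(var / n)
    return mos, ci

# Hypothetical ratings for one system (illustration only).
ratings = [5, 4, 4, 5, 4, 3, 5, 4]
mos, ci = mean_opinion_score(ratings)
print(f"MOS = {mos:.2f} +/- {ci:.2f}")  # MOS = 4.25 +/- 0.49
```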

Model list

  • Groundtruth: Natural speech
  • Tacotron2.v2: Tacotron 2 using Forward attention with a transition agent
  • Tacotron2.v3: Tacotron 2 with a guided attention loss
  • Transformer.v1: Transformer with a guided attention loss
  • Transformer.v3: Transformer with a guided attention loss and phoneme inputs
  • FastSpeech.v2: FastSpeech trained with Transformer.v1 as a teacher
  • FastSpeech.v3: FastSpeech trained with Transformer.v3 as a teacher
  • FastSpeech.v4: FastSpeech trained with Transformer.v3 + knowledge distillation (EXPERIMENTAL)
  • Merlin: Conventional statistical speech synthesis system based on Merlin + WORLD
  • Mozilla: Pre-trained Tacotron 2 and WaveRNN provided by mozilla/TTS
  • NVIDIA: Pre-trained Tacotron 2 and WaveGlow provided by NVIDIA/tacotron2

The Commission also recommends

Groundtruth Tacotron2.v2 Tacotron2.v3
Transformer.v1 Transformer.v3
FastSpeech.v2 FastSpeech.v3 FastSpeech.v4
Merlin Mozilla NVIDIA

As a result of these studies, the planning document submitted by the Secretary of the Treasury to the Bureau of the Budget on August thirty-one,

Groundtruth Tacotron2.v2 Tacotron2.v3
Transformer.v1 Transformer.v3
FastSpeech.v2 FastSpeech.v3 FastSpeech.v4
Merlin Mozilla NVIDIA

The FBI now transmits information on all defectors, a category which would, of course, have included Oswald.

Groundtruth Tacotron2.v2 Tacotron2.v3
Transformer.v1 Transformer.v3
FastSpeech.v2 FastSpeech.v3 FastSpeech.v4
Merlin Mozilla NVIDIA

they seem unduly restrictive in continuing to require some manifestation of animus against a Government official.

Groundtruth Tacotron2.v2 Tacotron2.v3
Transformer.v1 Transformer.v3
FastSpeech.v2 FastSpeech.v3 FastSpeech.v4
Merlin Mozilla NVIDIA

and each agency given clear understanding of the assistance which the Secret Service expects.

Groundtruth Tacotron2.v2 Tacotron2.v3
Transformer.v1 Transformer.v3
FastSpeech.v2 FastSpeech.v3 FastSpeech.v4
Merlin Mozilla NVIDIA

Audio samples (Mandarin)

Mandarin samples are created with the egs/csmsc/tts1 recipe.
You can find more samples in our Google Drive.

昨日,这名“伤者”与医生全部被警方依法刑事拘留。 (English: "Yesterday, this 'injured person' and the doctors were all placed under criminal detention by the police in accordance with the law.")

Groundtruth Transformer.v1 FastSpeech.v3

钱伟长想到上海来办学校是经过深思熟虑的。 (English: "Qian Weichang's decision to come to Shanghai to run a school was made after careful deliberation.")

Groundtruth Transformer.v1 FastSpeech.v3

她见我一进门就骂,吃饭时也骂,骂得我抬不起头。 (English: "She scolded me the moment I walked in, scolded me during meals, until I could not hold my head up.")

Groundtruth Transformer.v1 FastSpeech.v3

Audio samples (Japanese)

Japanese samples are created with the egs/jsut/tts1 recipe.
You can find more samples in our Google Drive.

水をマレーシアから買わなくてはならないのです。 (English: "We have to buy water from Malaysia.")

Groundtruth Tacotron2 Transformer

木曜日、停戦会談は、何の進展もないまま終了しました。 (English: "On Thursday, the ceasefire talks ended without any progress.")

Groundtruth Tacotron2 Transformer

上院議員は私がデータをゆがめたと告発した。 (English: "The senator accused me of distorting the data.")

Groundtruth Tacotron2 Transformer

Citation

@misc{hayashi2019espnettts,
    title={ESPnet-TTS: Unified, Reproducible, and Integratable Open Source End-to-End Text-to-Speech Toolkit},
    author={Tomoki Hayashi and Ryuichi Yamamoto and Katsuki Inoue and Takenori Yoshimura and Shinji Watanabe and Tomoki Toda and Kazuya Takeda and Yu Zhang and Xu Tan},
    year={2019},
    eprint={1910.10909},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}