Towards Robust FastSpeech 2 by Modelling Residual Multimodality

Fabian Kögel, Bac Nguyen, Fabien Cardinaux - Sony Europe B.V., Stuttgart Laboratory 1, Germany - Interspeech 2023

Abstract: State-of-the-art non-autoregressive text-to-speech (TTS) models based on FastSpeech 2 can efficiently synthesise high-fidelity and natural speech. For expressive speech datasets, however, we observe characteristic audio distortions. We demonstrate that such artefacts are introduced into the vocoder reconstruction by over-smooth mel-spectrogram predictions, which are induced by the choice of mean-squared-error (MSE) loss for training the mel-spectrogram decoder. With MSE loss, FastSpeech 2 is limited to learning conditional averages of the training distribution, which might not lie close to a natural sample if the distribution remains multimodal after all conditioning signals. To alleviate this problem, we introduce TVC-GMM, a mixture model of Trivariate-Chain Gaussian distributions, to model the residual multimodality. TVC-GMM reduces spectrogram smoothness and improves perceptual audio quality, in particular for expressive datasets, as shown by both objective and subjective evaluation.
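To make the abstract's argument concrete, below is a minimal PyTorch sketch of replacing the MSE decoder objective with a per-bin Gaussian-mixture negative log-likelihood. This is not the paper's implementation: the `GMMHead` name, shapes, and the univariate-per-bin factorisation are our assumptions, whereas the paper's TVC-GMM additionally chains trivariate Gaussians across neighbouring time and frequency bins.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class GMMHead(nn.Module):
    """Hypothetical mixture-density decoder head: predicts a k-component
    univariate Gaussian mixture for every time-frequency bin instead of
    the single point estimate an MSE-trained decoder regresses to."""

    def __init__(self, hidden_dim: int, n_mels: int = 80, k: int = 5):
        super().__init__()
        self.n_mels, self.k = n_mels, k
        # Per bin: k mixture logits, k means, k log-scales.
        self.proj = nn.Linear(hidden_dim, n_mels * k * 3)

    def forward(self, h):  # h: (batch, frames, hidden_dim)
        b, t, _ = h.shape
        p = self.proj(h).view(b, t, self.n_mels, self.k, 3)
        logits, mean, log_scale = p.unbind(dim=-1)
        return logits, mean, log_scale


def gmm_nll(logits, mean, log_scale, target):
    """Negative log-likelihood of the target mel bins under the mixture.
    Minimising this fits the full conditional distribution, whereas MSE
    only fits its conditional average, which for a multimodal target
    need not lie close to any natural spectrogram."""
    comp = torch.distributions.Normal(mean, log_scale.exp())
    log_prob = comp.log_prob(target.unsqueeze(-1))   # (b, t, mels, k)
    log_w = F.log_softmax(logits, dim=-1)
    return -torch.logsumexp(log_w + log_prob, dim=-1).mean()
```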


Insufficient modelling degrades vocoder reconstruction quality (Section 4.2)

LJSpeech (single-speaker)

Ground-Truth: (audio)

| | Griffin-Lim | HiFiGAN | HiFiGAN (finetuned) | MelGAN | CARGAN | WaveGlow | WaveRNN |
|---|---|---|---|---|---|---|---|
| GT Spectrogram Reconstruction | (audio) | (audio) | (audio) | (audio) | (audio) | (audio) | (audio) |
| + Smooth (metallic artefact) | (audio) | (audio) | (audio) | (audio) | (audio) | (audio) | (audio) |
| + Sharpen (bubbling artefact) | (audio) | (audio) | (audio) | (audio) | (audio) | (audio) | (audio) |

VCTK (multi-speaker)

Ground-Truth: (audio)

| | Griffin-Lim | HiFiGAN | HiFiGAN (finetuned) | MelGAN | CARGAN | WaveGlow | WaveRNN |
|---|---|---|---|---|---|---|---|
| GT Spectrogram Reconstruction | (audio) | (audio) | (audio) | (audio) | (audio) | (audio) | (audio) |
| + Smooth (metallic artefact) | (audio) | (audio) | (audio) | (audio) | (audio) | (audio) | (audio) |
| + Sharpen (bubbling artefact) | (audio) | (audio) | (audio) | (audio) | (audio) | (audio) | (audio) |

LibriTTS (multi-speaker)

Ground-Truth: (audio)

| | Griffin-Lim | HiFiGAN | HiFiGAN (finetuned) | MelGAN | CARGAN | WaveGlow | WaveRNN |
|---|---|---|---|---|---|---|---|
| GT Spectrogram Reconstruction | (audio) | (audio) | (audio) | (audio) | (audio) | (audio) | (audio) |
| + Smooth (metallic artefact) | (audio) | (audio) | (audio) | (audio) | (audio) | (audio) | (audio) |
| + Sharpen (bubbling artefact) | (audio) | (audio) | (audio) | (audio) | (audio) | (audio) | (audio) |
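The "+ Smooth" and "+ Sharpen" conditions above can be reproduced in spirit by filtering the ground-truth mel-spectrogram before vocoding. The sketch below is only an approximation of the paper's manipulation: the filter widths, the unsharp-masking formulation, and the `vocoder` handle are our assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter


def smooth(mel: np.ndarray, sigma: float = 1.0) -> np.ndarray:
    """Blur a (mel-bins x frames) spectrogram; vocoding the result
    tends to produce the metallic artefact heard in the "+ Smooth" rows."""
    return gaussian_filter(mel, sigma=sigma)


def sharpen(mel: np.ndarray, sigma: float = 1.0, amount: float = 1.0) -> np.ndarray:
    """Unsharp masking; vocoding an over-sharpened spectrogram tends to
    produce the bubbling artefact heard in the "+ Sharpen" rows."""
    return mel + amount * (mel - gaussian_filter(mel, sigma=sigma))


# Hypothetical usage, assuming a pretrained `vocoder` callable that maps
# a (log-)mel spectrogram back to a waveform (e.g. a HiFi-GAN wrapper):
# wav_metallic = vocoder(smooth(gt_mel))
# wav_bubbling = vocoder(sharpen(gt_mel))
```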



Speech synthesis with TVC-GMM improves perceptual audio quality (Section 4.4)

LJSpeech (single-speaker)

| | Ground-Truth | GT Reconstruction (HiFiGAN) | FastSpeech 2 | TVC-GMM [k=1] naive | TVC-GMM [k=5] naive | TVC-GMM [k=1] cond. | TVC-GMM [k=5] cond. |
|---|---|---|---|---|---|---|---|
| Sample 1 | (audio) | (audio) | (audio) | (audio) | (audio) | (audio) | (audio) |
| Sample 2 | (audio) | (audio) | (audio) | (audio) | (audio) | (audio) | (audio) |
| Sample 3 | (audio) | (audio) | (audio) | (audio) | (audio) | (audio) | (audio) |
| Sample 4 | (audio) | (audio) | (audio) | (audio) | (audio) | (audio) | (audio) |
| Sample 5 | (audio) | (audio) | (audio) | (audio) | (audio) | (audio) | (audio) |

VCTK (multi-speaker)

| | Ground-Truth | GT Reconstruction (HiFiGAN) | FastSpeech 2 | TVC-GMM [k=1] naive | TVC-GMM [k=5] naive | TVC-GMM [k=1] cond. | TVC-GMM [k=5] cond. |
|---|---|---|---|---|---|---|---|
| Sample 1 | (audio) | (audio) | (audio) | (audio) | (audio) | (audio) | (audio) |
| Sample 2 | (audio) | (audio) | (audio) | (audio) | (audio) | (audio) | (audio) |
| Sample 3 | (audio) | (audio) | (audio) | (audio) | (audio) | (audio) | (audio) |
| Sample 4 | (audio) | (audio) | (audio) | (audio) | (audio) | (audio) | (audio) |
| Sample 5 | (audio) | (audio) | (audio) | (audio) | (audio) | (audio) | (audio) |

LibriTTS (multi-speaker)

| | Ground-Truth | GT Reconstruction (HiFiGAN) | FastSpeech 2 | TVC-GMM [k=1] naive | TVC-GMM [k=5] naive | TVC-GMM [k=1] cond. | TVC-GMM [k=5] cond. |
|---|---|---|---|---|---|---|---|
| Sample 1 | (audio) | (audio) | (audio) | (audio) | (audio) | (audio) | (audio) |
| Sample 2 | (audio) | (audio) | (audio) | (audio) | (audio) | (audio) | (audio) |
| Sample 3 | (audio) | (audio) | (audio) | (audio) | (audio) | (audio) | (audio) |
| Sample 4 | (audio) | (audio) | (audio) | (audio) | (audio) | (audio) | (audio) |
| Sample 5 | (audio) | (audio) | (audio) | (audio) | (audio) | (audio) | (audio) |
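The [k] and naive/cond. labels distinguish the number of mixture components and the sampling strategy. As a rough illustration of what we call naive sampling, the sketch below (our reading, reusing the hypothetical per-bin parameters from the GMMHead sketch above) draws every time-frequency bin independently from its predicted mixture; conditional sampling would additionally condition each bin on its already-sampled neighbours along the trivariate chain.

```python
import torch
from torch.distributions import Categorical, MixtureSameFamily, Normal


def sample_naive(logits, mean, log_scale):
    """Draw every time-frequency bin independently from its mixture
    (shapes as in the GMMHead sketch: (b, t, mels, k)). This ignores
    the coupling between neighbouring bins that TVC-GMM models; the
    "cond." variant samples bins sequentially, conditioning each one
    on already-sampled neighbours."""
    mixture = Categorical(logits=logits)         # component weights per bin
    components = Normal(mean, log_scale.exp())   # k Gaussians per bin
    return MixtureSameFamily(mixture, components).sample()  # (b, t, mels)
```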

References

Ground-truth audio samples are taken from the respective datasets:

[1] K. Ito and L. Johnson, [The LJ Speech Dataset](https://keithito.com/LJ-Speech-Dataset), 2017.
[2] Yamagishi et al., [CSTR VCTK Corpus: English Multi-speaker Corpus for CSTR Voice Cloning Toolkit (version 0.92)](https://datashare.ed.ac.uk/handle/10283/3443), 2019.
[3] Zen et al., [LibriTTS: A Corpus Derived from LibriSpeech for Text-to-Speech](https://research.google/tools/datasets/libri-tts), 2019.