TY - JOUR
T1 - StyleTTS: A Style-Based Generative Model for Natural and Diverse Text-to-Speech Synthesis
AU - Li, Yinghao Aaron
AU - Han, Cong
AU - Mesgarani, Nima
N1 - Publisher Copyright:
© 2025 IEEE.
PY - 2025
Y1 - 2025
N2 - Text-to-Speech (TTS) has recently seen great progress in synthesizing high-quality speech owing to the rapid development of parallel TTS systems. Yet producing speech with naturalistic prosodic variations, speaking styles, and emotional tones remains challenging. In addition, many existing parallel TTS models struggle to identify optimal monotonic alignments, since speech and duration generation typically occur independently. Here, we propose StyleTTS, a style-based generative model for parallel TTS that can synthesize diverse speech with natural prosody from a reference speech utterance. Using our novel Transferable Monotonic Aligner (TMA) and duration-invariant data augmentation, StyleTTS significantly outperforms baseline models on both single-speaker and multi-speaker datasets in subjective tests of speech naturalness and synthesized speaker similarity. It also demonstrates higher robustness and emotional similarity to the reference speech, as indicated by word error rate (WER) and acoustic feature correlations. Through self-supervised learning, StyleTTS can generate speech with the same emotional and prosodic tone as the reference speech without needing explicit labels for these categories. In addition, when trained with a large number of speakers, our model can perform zero-shot speaker adaptation. The source code and audio samples can be found on our demo page https://styletts.github.io/.
UR - http://www.scopus.com/inward/record.url?scp=85216084122&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85216084122&partnerID=8YFLogxK
U2 - 10.1109/JSTSP.2025.3530171
DO - 10.1109/JSTSP.2025.3530171
M3 - Article
AN - SCOPUS:85216084122
SN - 1932-4553
JO - IEEE Journal on Selected Topics in Signal Processing
JF - IEEE Journal on Selected Topics in Signal Processing
ER -