Mirco committed on
Commit 2dd226b
1 Parent(s): 58669eb

Update README.md

Files changed (1)
  1. README.md +20 -0
README.md CHANGED
@@ -45,6 +45,26 @@ mel_specs = torch.rand(2, 80,298)
  waveforms = hifi_gan.decode_batch(mel_specs)
  ```

+ ### Using the Vocoder with the TTS
+ ```python
+ import torchaudio
+ from speechbrain.pretrained import Tacotron2
+ from speechbrain.pretrained import HIFIGAN
+
+ # Initialize TTS (Tacotron2) and Vocoder (HiFi-GAN)
+ tacotron2 = Tacotron2.from_hparams(source="speechbrain/tts-tacotron2-ljspeech", savedir="tmpdir_tts")
+ hifi_gan = HIFIGAN.from_hparams(source="speechbrain/tts-hifigan-libritts-22050Hz", savedir="tmpdir_vocoder")
+
+ # Running the TTS (text-to-spectrogram)
+ mel_output, mel_length, alignment = tacotron2.encode_text("Mary had a little lamb")
+
+ # Running the Vocoder (spectrogram-to-waveform)
+ waveforms = hifi_gan.decode_batch(mel_output)
+
+ # Save the waveform
+ torchaudio.save('example_TTS.wav', waveforms.squeeze(1), 22050)
+ ```
+
  ### Inference on GPU
  To perform inference on the GPU, add `run_opts={"device":"cuda"}` when calling the `from_hparams` method.

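The GPU note above amounts to passing `run_opts` when the pretrained models are loaded. A minimal sketch with the two models from the added section, assuming a CUDA-capable GPU is available:

```python
# Sketch of GPU inference with the models used in the added README section.
# Assumption: a CUDA-capable GPU is available.
from speechbrain.pretrained import Tacotron2, HIFIGAN

# run_opts={"device": "cuda"} places the pretrained models on the GPU at load time.
tacotron2 = Tacotron2.from_hparams(
    source="speechbrain/tts-tacotron2-ljspeech",
    savedir="tmpdir_tts",
    run_opts={"device": "cuda"},
)
hifi_gan = HIFIGAN.from_hparams(
    source="speechbrain/tts-hifigan-libritts-22050Hz",
    savedir="tmpdir_vocoder",
    run_opts={"device": "cuda"},
)

# The rest of the pipeline is the same as in the added section above.
mel_output, mel_length, alignment = tacotron2.encode_text("Mary had a little lamb")
waveforms = hifi_gan.decode_batch(mel_output)
```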