Update Transformers code example

#6
by sanchit-gandhi (HF staff) - opened
Files changed (1)
  1. README.md +6 -5
README.md CHANGED
@@ -103,22 +103,23 @@ inputs = processor(
 audio_values = model.generate(**inputs, max_new_tokens=256)
 ```
 
-3. Listen to the audio samples either in an ipynb notebook:
+4. Listen to the audio samples either in an ipynb notebook:
 
 ```python
 from IPython.display import Audio
 
 sampling_rate = model.config.audio_encoder.sampling_rate
-Audio(audio_values[0].numpy(), rate=sampling_rate)
+Audio(audio_values[0].cpu().numpy(), rate=sampling_rate)
 ```
 
-Or save them as a `.wav` file using a third-party library, e.g. `scipy`:
+Or save them as a `.wav` file using a third-party library, e.g. `soundfile`:
 
 ```python
-import scipy
+import soundfile as sf
 
 sampling_rate = model.config.audio_encoder.sampling_rate
-scipy.io.wavfile.write("musicgen_out.wav", rate=sampling_rate, data=audio_values[0, 0].numpy())
+audio_values = audio_values.cpu().numpy()
+sf.write("musicgen_out.wav", audio_values[0].T, sampling_rate)
 ```
 
 For more details on using the MusicGen model for inference using the 🤗 Transformers library, refer to the [MusicGen docs](https://huggingface.co/docs/transformers/model_doc/musicgen).
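
For reference, here is a minimal end-to-end sketch of the updated snippet. The model loading and processor lines sit above the hunk shown in this diff, so they are reconstructed from the standard MusicGen usage rather than taken from the change itself; the `facebook/musicgen-small` checkpoint and the text prompt below are illustrative assumptions.

```python
# Sketch of the full flow after this change: generate audio with MusicGen via
# 🤗 Transformers, then save it with soundfile. Checkpoint name and prompt are
# assumptions for illustration; adjust them to the model card you are on.
import soundfile as sf
from transformers import AutoProcessor, MusicgenForConditionalGeneration

processor = AutoProcessor.from_pretrained("facebook/musicgen-small")
model = MusicgenForConditionalGeneration.from_pretrained("facebook/musicgen-small")

inputs = processor(
    text=["80s pop track with bassy drums and synth"],
    padding=True,
    return_tensors="pt",
)

# 256 new tokens corresponds to roughly five seconds of generated audio.
audio_values = model.generate(**inputs, max_new_tokens=256)

# Move the tensor to CPU before converting to NumPy (needed when generation
# ran on a GPU), then write the first waveform in the batch as a .wav file.
sampling_rate = model.config.audio_encoder.sampling_rate
audio_values = audio_values.cpu().numpy()
sf.write("musicgen_out.wav", audio_values[0].T, sampling_rate)
```

Transposing with `audio_values[0].T` gives `soundfile` the `(frames, channels)` layout it expects, and the explicit `.cpu()` call is what keeps the snippet working when the model runs on a GPU.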