Update README.md (#8)
Update README.md (a087c76b076f44db0fa2c2e56b4a30ad3b67f1ab)
Co-authored-by: Vaibhav Srivastav <[email protected]>
README.md CHANGED

## 🤗 Transformers Usage

You can run Bark locally with the 🤗 Transformers library from version 4.31.0 onwards.

1. First install the 🤗 [Transformers library](https://github.com/huggingface/transformers) and scipy:

```
pip install --upgrade pip
pip install --upgrade transformers scipy
```
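
Since this section targets Transformers 4.31.0 or newer, a quick sanity check of the installed version can save a confusing failure later. This is only a sketch; the `packaging` helper ships as a dependency of `transformers`.

```python
import transformers
from packaging import version

# Bark support in Transformers requires version 4.31.0 or newer (see step 1).
assert version.parse(transformers.__version__) >= version.parse("4.31.0"), (
    f"Found transformers {transformers.__version__}; please upgrade."
)
```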

2. Run inference via the `Text-to-Speech` (TTS) pipeline. You can run the Bark model through the TTS pipeline in just a few lines of code!

```python
from transformers import pipeline
import scipy

synthesiser = pipeline("text-to-speech", "suno/bark-small")

speech = synthesiser("Hello, my dog is cooler than you!", forward_params={"do_sample": True})

scipy.io.wavfile.write("bark_out.wav", rate=speech["sampling_rate"], data=speech["audio"])
```
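
The pipeline route is not tied to the small checkpoint or to a single sentence. As a sketch (the full-size `suno/bark` checkpoint and the extra prompts are assumptions for illustration, not part of the original instructions), the same call can be repeated over several prompts:

```python
from transformers import pipeline
import scipy

# Assumption: the full-size checkpoint; "suno/bark-small" works the same way.
synthesiser = pipeline("text-to-speech", "suno/bark")

prompts = [
    "Hello, my dog is cooler than you!",
    "And here is a second sentence, generated separately.",
]

for i, prompt in enumerate(prompts):
    # Each call returns a dict with the waveform under "audio" and its "sampling_rate".
    speech = synthesiser(prompt, forward_params={"do_sample": True})
    scipy.io.wavfile.write(f"bark_out_{i}.wav", rate=speech["sampling_rate"], data=speech["audio"])
```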

3. Run inference via the Transformers modelling code. You can combine the processor with `generate` to convert text into a mono 24 kHz speech waveform, which gives you more fine-grained control.

```python
from transformers import AutoProcessor, AutoModel

processor = AutoProcessor.from_pretrained("suno/bark-small")
model = AutoModel.from_pretrained("suno/bark-small")

inputs = processor(
    text=["Hello, my dog is cooler than you!"],  # any text prompt works here
    return_tensors="pt",
)

speech_values = model.generate(**inputs, do_sample=True)
```
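
Bark generation is compute-heavy, so it is worth running the modelling-code path on a GPU when one is available. The sketch below assumes a CUDA device and reuses the `processor` and `model` objects from the snippet above; it is illustrative rather than part of the original instructions.

```python
import torch

# Assumption: `processor` and `model` come from the snippet above.
device = "cuda" if torch.cuda.is_available() else "cpu"
model = model.to(device)

inputs = processor(
    text=["Hello, my dog is cooler than you!"],
    return_tensors="pt",
)
# Move the tokenized inputs onto the same device as the model.
inputs = {key: value.to(device) for key, value in inputs.items()}

with torch.no_grad():
    speech_values = model.generate(**inputs, do_sample=True)

# Bring the generated waveform back to the CPU as a 1-D numpy array.
audio_array = speech_values.cpu().numpy().squeeze()
```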

4. Listen to the speech samples either in an ipynb notebook:

```python
from IPython.display import Audio

sampling_rate = model.generation_config.sample_rate
Audio(speech_values.cpu().numpy().squeeze(), rate=sampling_rate)
```

Or save them as a `.wav` file using a third-party library, e.g. `scipy`:

```python
import scipy

sampling_rate = model.generation_config.sample_rate
scipy.io.wavfile.write("bark_out.wav", rate=sampling_rate, data=speech_values.cpu().numpy().squeeze())
```
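
`scipy.io.wavfile.write` stores the float waveform as a 32-bit float WAV. If a target player only handles 16-bit PCM, one optional variant (a sketch, reusing `model` and `speech_values` from step 3) is to rescale before writing:

```python
import numpy as np
import scipy

# Assumption: `model` and `speech_values` come from step 3 above.
sampling_rate = model.generation_config.sample_rate
audio_array = speech_values.cpu().numpy().squeeze()

# Rescale the [-1, 1] float waveform to 16-bit PCM for wider player compatibility.
audio_int16 = (np.clip(audio_array, -1.0, 1.0) * 32767).astype(np.int16)
scipy.io.wavfile.write("bark_out_int16.wav", rate=sampling_rate, data=audio_int16)
```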