Update README.md
README.md CHANGED
@@ -56,6 +56,7 @@ llama-server --hf-repo hellork/calme-2.1-qwen2-7b-IQ4_NL-GGUF --hf-file calme-2.1-qwen2-7b-iq4_nl-imat.gguf
 Interact with this model by speaking to it. Lean, fast, & private: networked speech to text, AI images, multi-modal voice chat, app control, webcam, and sound with less than 4 GiB of VRAM.
 [whisper_dictation](https://github.com/themanyone/whisper_dictation)

+*Quick start*
 ```bash
 git clone -b main --single-branch https://github.com/themanyone/whisper_dictation.git
 pip install -r whisper_dictation/requirements.txt
@@ -64,10 +65,15 @@ git clone https://github.com/ggerganov/whisper.cpp
 cd whisper.cpp
 GGML_CUDA=1 make -j  # assuming CUDA is available; see the docs
 ln -s server ~/.local/bin/whisper_cpp_server  # (or put it anywhere in $PATH)
-
 whisper_cpp_server -l en -m models/ggml-tiny.en.bin --port 7777
+
+# the -ngl option assumes CUDA is available; see the docs
+llama-server --hf-repo hellork/calme-2.1-qwen2-7b-IQ4_NL-GGUF --hf-file calme-2.1-qwen2-7b-iq4_nl-imat.gguf -c 2048 -ngl 17 --port 8888
+
+cd whisper_dictation
 ./whisper_cpp_client.py
 ```
+
 See [the docs](https://github.com/themanyone/whisper_dictation) for tips on enabling the computer to talk back, draw AI images, carry out voice commands, and other features.

 Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) in the llama.cpp repo.
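For quick verification of the transcription server the quick start launches on port 7777, a minimal smoke test could look like the sketch below. The `/inference` route and multipart fields follow whisper.cpp's bundled server example (an assumption, since builds vary), and `samples/jfk.wav` is the sample audio that ships with the whisper.cpp repo:

```bash
# POST a WAV file to the whisper.cpp server started above (port 7777).
# The /inference endpoint and form fields follow whisper.cpp's server
# example; adjust if your build differs.
curl http://127.0.0.1:7777/inference \
  -H "Content-Type: multipart/form-data" \
  -F file="@samples/jfk.wav" \
  -F response_format="json"
```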
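Similarly, llama-server exposes an OpenAI-compatible HTTP API, so a minimal sketch of a chat request against the instance started on port 8888 might look like this (the payload uses standard OpenAI-style fields; treat the exact values as illustrative):

```bash
# Minimal chat request to the llama-server started above (port 8888),
# via the OpenAI-compatible /v1/chat/completions route.
curl http://127.0.0.1:8888/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "messages": [
          {"role": "user", "content": "Say hello in five words."}
        ],
        "max_tokens": 32
      }'
```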