hellork committed
Commit dc12466 · verified · 1 Parent(s): 6cd4d34

Update README.md

Files changed (1)
  1. README.md +22 -1
README.md CHANGED
@@ -51,7 +51,28 @@ llama-cli --hf-repo hellork/calme-2.1-qwen2-7b-Q4_K_M-GGUF --hf-file calme-2.1-q
 llama-server --hf-repo hellork/calme-2.1-qwen2-7b-Q4_K_M-GGUF --hf-file calme-2.1-qwen2-7b-q4_k_m.gguf -c 2048
 ```

-Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
+### The Ship's Computer:
+
+[whisper_dictation](https://github.com/themanyone/whisper_dictation)
+
+Interact with this model by speaking to it. Lean, fast, and private: networked speech to text, AI images, multi-modal voice chat, app control, webcam, and sound, all with less than 4 GiB of VRAM.
+
+```bash
+git clone -b main --single-branch https://github.com/themanyone/whisper_dictation.git
+pip install -r whisper_dictation/requirements.txt
+
+git clone https://github.com/ggerganov/whisper.cpp
+cd whisper.cpp
+GGML_CUDA=1 make -j  # assuming CUDA is available; see the docs
+ln -s server ~/.local/bin/whisper_cpp_server  # (just put it somewhere in $PATH)
+
+whisper_cpp_server -l en -m models/ggml-tiny.en.bin --port 7777
+cd whisper_dictation
+./whisper_cpp_client.py
+```
+See [the docs](https://github.com/themanyone/whisper_dictation) for tips on integrating with the llama.cpp server, enabling the computer to talk back, draw AI images, carry out voice commands, and other features.
+
+### Install Llama.cpp via git:

 Step 1: Clone llama.cpp from GitHub.
 ```
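
Once `llama-server` is running with the command shown in the hunk above, its OpenAI-compatible HTTP API can be smoke-tested before wiring it up to whisper_dictation. A minimal sketch, assuming the server's default address `127.0.0.1:8080` (no `--host`/`--port` overrides were given):

```bash
# Send one chat request to llama-server's OpenAI-compatible endpoint.
# 127.0.0.1:8080 is the default bind address; adjust if --host/--port were changed.
curl http://127.0.0.1:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"messages": [{"role": "user", "content": "Say hello."}]}'
```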