---
datasets:
- IlyaGusev/saiga_scored
- IlyaGusev/saiga_preferences
language:
- ru
inference: false
license: apache-2.0
---

Llama.cpp compatible versions of the original [12B model](https://huggingface.co/IlyaGusev/saiga_nemo_12b).

Download one of the versions, for example `saiga_nemo_12b.Q4_K_M.gguf`:
```
wget https://huggingface.co/IlyaGusev/saiga_nemo_12b_gguf/resolve/main/saiga_nemo_12b.Q4_K_M.gguf
```

Download [interact_llama3_llamacpp.py](https://raw.githubusercontent.com/IlyaGusev/rulm/master/self_instruct/src/interact_llama3_llamacpp.py):
```
wget https://raw.githubusercontent.com/IlyaGusev/rulm/master/self_instruct/src/interact_llama3_llamacpp.py
```

How to run:
```
pip install llama-cpp-python fire

python3 interact_llama3_llamacpp.py saiga_nemo_12b.Q4_K_M.gguf
```

System requirements:
* 15 GB of RAM for the q8_0 quantization; smaller quantizations need less