maddes8cht committed on
Commit 3639d82 · 1 Parent(s): fd8df11

"Update README.md"

Files changed (1)
  1. README.md +4 -0
README.md CHANGED
@@ -10,6 +10,7 @@ These will contain increasingly more content to help find the best models for a
 # openbuddy-falcon-7b-v6-bf16 - GGUF
 - Model creator: [OpenBuddy](https://huggingface.co/OpenBuddy)
 - Original model: [openbuddy-falcon-7b-v6-bf16](https://huggingface.co/OpenBuddy/openbuddy-falcon-7b-v6-bf16)
+
 ## Note:

 This is v6 of OpenBuddy's Falcon-7b Variant. Somehow they forgot to provide a real `Model Card` for v6, so refer to the v5 `Model Card` instead:
@@ -21,6 +22,8 @@ OpenBuddy provides strong multiligual Model variants. On their Huggingface Organ
 > Our mission with OpenBuddy is to provide a free, open, and offline-capable AI model that operates on users' devices, irrespective of their language or cultural background. We strive to empower individuals worldwide to access and benefit from AI technology.


+
+
 # About GGUF format

 `gguf` is the current file format used by the [`ggml`](https://github.com/ggerganov/ggml) library.
@@ -44,6 +47,7 @@ So, if possible, use K-quants.
 With a Q6_K you should find it really hard to find a quality difference to the original model - ask your model two times the same question and you may encounter bigger quality differences.


+
 # Original Model Card:
 <center>
 [![GitHub](https://maddes8cht.github.io/assets/buttons/github-io-button.png)](https://maddes8cht.github.io)
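
The README lines in the diff above mention the GGUF format used by `ggml`/`llama.cpp` and recommend K-quants such as Q6_K. As a minimal sketch (not part of this commit), here is how one of the GGUF files from this repository could be loaded with the `llama-cpp-python` bindings; the model file name below is an assumption — substitute whichever quantization you actually downloaded.

```python
# Minimal sketch: loading a GGUF quantization of this model with llama-cpp-python.
# The file name is hypothetical -- use the quant you downloaded from this repo
# (e.g. a Q6_K file, as recommended in the README above).
from llama_cpp import Llama

llm = Llama(
    model_path="openbuddy-falcon-7b-v6-bf16.Q6_K.gguf",  # hypothetical local path
    n_ctx=2048,  # context window; adjust as needed
)

# Ask the same question twice, as the README suggests, to gauge run-to-run variance.
for _ in range(2):
    out = llm("Question: What is the capital of France?\nAnswer:", max_tokens=32)
    print(out["choices"][0]["text"].strip())
```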