MaziyarPanahi committed on
Commit
4240c52
1 Parent(s): 9ed140b

Upload folder using huggingface_hub (#1)

Browse files

- 7c713c9e02c329fe0b1605310b2991d8d9e38fff45a5d84fdd791ef61ba8a891 (eec24c0faf5e51ef782c40a18069efce46acaaef)
- 5fec96ff49cb912ddafc00de7cce59e67f8f7b8f78356e3e1333fc1923234403 (fd79f862053329ea499415a1ccba6850062dd18d)
- b94011be4a6cba3be81bb88440a855f63aee356935d40ecaa14d9fb7324daaf7 (0d8f75acc471732bc38a0aff50a88dab47388db6)
- bb0720c0654d4462a76bd3dd6c2122705a3b8148555d481c63fecc2f040011c5 (bd5d1be796e9483b7279e89232b9d37b11fe852f)
- 7a4c1828443f1f122aa76d58197ad29664930da72a20dbb80972689d2c6132fa (a1ed170bb4d856c778a89a130144122f83aa11d2)
- aa8c6416b102618d615cb335a62ed59361cd283fdeacaf258bec044dccf08bd6 (02ca9002ab581984b915de40944c2bb5bc2c71c5)

.gitattributes CHANGED
@@ -33,3 +33,9 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  *.zip filter=lfs diff=lfs merge=lfs -text
  *.zst filter=lfs diff=lfs merge=lfs -text
  *tfevents* filter=lfs diff=lfs merge=lfs -text
+ Hermes2-Gutenberg2-Mistral-7B.Q5_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+ Hermes2-Gutenberg2-Mistral-7B.Q5_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+ Hermes2-Gutenberg2-Mistral-7B.Q6_K.gguf filter=lfs diff=lfs merge=lfs -text
+ Hermes2-Gutenberg2-Mistral-7B.Q8_0.gguf filter=lfs diff=lfs merge=lfs -text
+ Hermes2-Gutenberg2-Mistral-7B.fp16.gguf filter=lfs diff=lfs merge=lfs -text
+ Hermes2-Gutenberg2-Mistral-7B-GGUF_imatrix.dat filter=lfs diff=lfs merge=lfs -text
Hermes2-Gutenberg2-Mistral-7B-GGUF_imatrix.dat ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:1bdaa3bbed84c98b1e10301e9dbbccbde423f6a59aa876f61d6735610d855fbb
+ size 4988146
Hermes2-Gutenberg2-Mistral-7B.Q5_K_M.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:822a688abbb254c1d36ab1fd069a96ce990aa123dca4280dee7c9a85000ba50b
+ size 5131613952
Hermes2-Gutenberg2-Mistral-7B.Q5_K_S.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:391f9127cfac213fd4a8604d170a2f2381b48936b644f8659c7d396ce7e3e2bf
+ size 4997920512
Hermes2-Gutenberg2-Mistral-7B.Q6_K.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:3ee8699c4e03f5ebe7890790d3b9397dabe0d5e66ec8b1861e2316e58b4c53ce
+ size 5942287104
Hermes2-Gutenberg2-Mistral-7B.Q8_0.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:fbf3a7d297803b2d4281dcdbe36e687e2022320319cd82e92b75cd5a87495119
+ size 7696143104
Hermes2-Gutenberg2-Mistral-7B.fp16.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:33e8f884cc2fe5d7cde7162e47716f13334400a4f091ebcb34221f9d61d6cbe3
+ size 14485262880
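The LFS pointers above give exact on-disk sizes for each quant. As a rough sanity check, those sizes can be converted to approximate bits per weight, assuming ~7.24 billion parameters for Mistral-7B (an approximate figure, not stated anywhere in this repo):

```python
# Rough bits-per-weight estimate from the LFS file sizes listed above.
# PARAMS is an assumption (~7.24e9 parameters for Mistral-7B).
PARAMS = 7.24e9

sizes_bytes = {
    "Q5_K_S": 4_997_920_512,
    "Q5_K_M": 5_131_613_952,
    "Q6_K": 5_942_287_104,
    "Q8_0": 7_696_143_104,
    "fp16": 14_485_262_880,
}

# bits per weight = (file size in bytes * 8) / parameter count
bpw = {name: size * 8 / PARAMS for name, size in sizes_bytes.items()}

for name, bits in bpw.items():
    print(f"{name}: ~{bits:.2f} bits/weight")
```

The fp16 file works out to almost exactly 16 bits per weight, which suggests the parameter-count assumption is close; the quantized files land slightly above their nominal bit widths because GGUF stores per-block scales alongside the weights.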
README.md ADDED
@@ -0,0 +1,46 @@
+ ---
+ tags:
+ - quantized
+ - 2-bit
+ - 3-bit
+ - 4-bit
+ - 5-bit
+ - 6-bit
+ - 8-bit
+ - GGUF
+ - text-generation
+ model_name: Hermes2-Gutenberg2-Mistral-7B-GGUF
+ base_model: nbeerbower/Hermes2-Gutenberg2-Mistral-7B
+ inference: false
+ model_creator: nbeerbower
+ pipeline_tag: text-generation
+ quantized_by: MaziyarPanahi
+ ---
+ # [MaziyarPanahi/Hermes2-Gutenberg2-Mistral-7B-GGUF](https://huggingface.co/MaziyarPanahi/Hermes2-Gutenberg2-Mistral-7B-GGUF)
+ - Model creator: [nbeerbower](https://huggingface.co/nbeerbower)
+ - Original model: [nbeerbower/Hermes2-Gutenberg2-Mistral-7B](https://huggingface.co/nbeerbower/Hermes2-Gutenberg2-Mistral-7B)
+
+ ## Description
+ [MaziyarPanahi/Hermes2-Gutenberg2-Mistral-7B-GGUF](https://huggingface.co/MaziyarPanahi/Hermes2-Gutenberg2-Mistral-7B-GGUF) contains GGUF format model files for [nbeerbower/Hermes2-Gutenberg2-Mistral-7B](https://huggingface.co/nbeerbower/Hermes2-Gutenberg2-Mistral-7B).
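A minimal sketch of fetching a single quant file from this repo with the `huggingface_hub` Python package (the package itself and the choice of the Q5_K_M file are assumptions, not part of this card):

```python
# Hypothetical sketch: download one GGUF file from this repo.
# Requires `pip install huggingface_hub`; the Q5_K_M filename is one
# of the files listed in this commit -- swap in any other quant.
REPO_ID = "MaziyarPanahi/Hermes2-Gutenberg2-Mistral-7B-GGUF"
FILENAME = "Hermes2-Gutenberg2-Mistral-7B.Q5_K_M.gguf"

def download(repo_id: str = REPO_ID, filename: str = FILENAME) -> str:
    """Download one GGUF file and return its local cache path."""
    # Imported lazily so the module loads even without huggingface_hub.
    from huggingface_hub import hf_hub_download
    return hf_hub_download(repo_id=repo_id, filename=filename)
```

Calling `download()` pulls the file into the local Hugging Face cache and returns its path; note the Q5_K_M file is roughly 5 GB, so the first call takes a while.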
+
+ ### About GGUF
+
+ GGUF is a new format introduced by the llama.cpp team on August 21st, 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
+
+ Here is an incomplete list of clients and libraries that are known to support GGUF:
+
+ * [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
+ * [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU acceleration, LangChain support, and an OpenAI-compatible API server.
+ * [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. A Linux version is available, in beta as of 27/11/2023.
+ * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
+ * [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU acceleration across all platforms and GPU architectures. Especially good for storytelling.
+ * [GPT4All](https://gpt4all.io/index.html), a free and open-source locally running GUI, supporting Windows, Linux, and macOS with full GPU acceleration.
+ * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
+ * [Faraday.dev](https://faraday.dev/), an attractive and easy-to-use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
+ * [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
+ * [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU acceleration, LangChain support, and an OpenAI-compatible API server. Note that, as of writing (November 27th, 2023), ctransformers has not been updated in a long time and does not support many recent models.
+
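As one concrete example from the list above, loading one of this repo's files with llama-cpp-python could look like the following sketch (the model path, context size, and prompt are illustrative assumptions; point `MODEL_PATH` at whichever quant you downloaded):

```python
# Minimal sketch: run a GGUF quant from this repo with llama-cpp-python.
# MODEL_PATH is an assumption -- it must point at a downloaded file.
MODEL_PATH = "./Hermes2-Gutenberg2-Mistral-7B.Q5_K_M.gguf"

def generate(prompt: str, max_tokens: int = 64) -> str:
    # Requires `pip install llama-cpp-python` and the model file on disk,
    # so the import is kept inside the function.
    from llama_cpp import Llama

    llm = Llama(
        model_path=MODEL_PATH,
        n_ctx=4096,       # context window; lower this to save memory
        n_gpu_layers=-1,  # offload all layers to GPU when one is available
    )
    out = llm(prompt, max_tokens=max_tokens)
    return out["choices"][0]["text"]
```

The same file also works unchanged with the llama.cpp CLI and server, since llama-cpp-python is a binding over the same loader.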
+ ## Special thanks
+
+ 🙏 Special thanks to [Georgi Gerganov](https://github.com/ggerganov) and the whole team working on [llama.cpp](https://github.com/ggerganov/llama.cpp/) for making all of this possible.