Kronikus committed on
Commit 7195b42
1 Parent(s): bb7b363

Add model card

Files changed (1): README.md (+118, -0)

---
library_name: transformers
tags:
- code
- chemistry
- medical
- quantized
- 4-bit
- AWQ
- text-generation
- autotrain_compatible
- endpoints_compatible
- chatml
license: apache-2.0
datasets:
- Locutusque/hyperion-v2.0
- argilla/distilabel-capybara-dpo-7k-binarized
language:
- en
model_creator: Locutusque
model_name: NeuralHyperion-2.0-Mistral-7B
model_type: mistral
pipeline_tag: text-generation
inference: false
prompt_template: '<|im_start|>system

  {system_message}<|im_end|>

  <|im_start|>user

  {prompt}<|im_end|>

  <|im_start|>assistant

  '
quantized_by: Suparious
---

# Locutusque/NeuralHyperion-2.0-Mistral-7B AWQ

**UPLOAD IN PROGRESS**

- Model creator: [Locutusque](https://huggingface.co/Locutusque)
- Original model: [NeuralHyperion-2.0-Mistral-7B](https://huggingface.co/Locutusque/NeuralHyperion-2.0-Mistral-7B)

![image/png](https://cdn-uploads.huggingface.co/production/uploads/6437292ecd93f4c9a34b0d47/9BU30Mh9bOkO2HRBDF8EE.png)

## Model Summary

`Locutusque/NeuralHyperion-2.0-Mistral-7B` is a state-of-the-art language model fine-tuned on the Hyperion-v2.0 and distilabel-capybara datasets for advanced reasoning across scientific domains. The model is designed to handle complex inquiries and instructions, leveraging the diverse, rich information contained in the Hyperion dataset. Its primary use cases include, but are not limited to, complex question answering, conversational understanding, code generation, medical text comprehension, mathematical reasoning, and logical reasoning.

## How to use

### Install the necessary packages

```bash
pip install --upgrade autoawq autoawq-kernels
```

### Example Python code

```python
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer, TextStreamer

model_path = "solidrust/NeuralHyperion-2.0-Mistral-7B-AWQ"
system_message = "You are Hyperion, incarnated as a powerful AI."

# Load model
model = AutoAWQForCausalLM.from_quantized(model_path,
                                          fuse_layers=True)
tokenizer = AutoTokenizer.from_pretrained(model_path,
                                          trust_remote_code=True)
streamer = TextStreamer(tokenizer,
                        skip_prompt=True,
                        skip_special_tokens=True)

# Convert prompt to tokens
prompt_template = """\
<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant"""

prompt = "You're standing on the surface of the Earth. "\
         "You walk one mile south, one mile west and one mile north. "\
         "You end up exactly where you started. Where are you?"

tokens = tokenizer(prompt_template.format(system_message=system_message,
                                          prompt=prompt),
                   return_tensors='pt').input_ids.cuda()

# Generate output
generation_output = model.generate(tokens,
                                   streamer=streamer,
                                   max_new_tokens=512)
```

### About AWQ

AWQ is an efficient, accurate, and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. Compared to GPTQ, it offers faster Transformers-based inference with quality equivalent to or better than the most commonly used GPTQ settings.
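
For context, here is a rough sketch of how an AWQ checkpoint like this one is typically produced with AutoAWQ. The `quant_config` values below are AutoAWQ's common defaults, not the exact settings used for this upload, and the output path is illustrative:

```python
# Illustrative sketch: quantizing the original model with AutoAWQ.
# quant_config uses common defaults, not necessarily this upload's settings.
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer

model_path = "Locutusque/NeuralHyperion-2.0-Mistral-7B"
quant_path = "NeuralHyperion-2.0-Mistral-7B-AWQ"  # hypothetical output dir
quant_config = {"zero_point": True, "q_group_size": 128, "w_bit": 4, "version": "GEMM"}

model = AutoAWQForCausalLM.from_pretrained(model_path)
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)

# Quantize (AutoAWQ uses a default calibration set when none is given),
# then save the 4-bit weights alongside the tokenizer.
model.quantize(tokenizer, quant_config=quant_config)
model.save_quantized(quant_path)
tokenizer.save_pretrained(quant_path)
```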

AWQ models are currently supported on Linux and Windows, with NVIDIA GPUs only. macOS users should use GGUF models instead.

It is supported by:

- [Text Generation Webui](https://github.com/oobabooga/text-generation-webui) - using Loader: AutoAWQ
- [vLLM](https://github.com/vllm-project/vllm) - version 0.2.2 or later, which supports all model types
- [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference)
- [Transformers](https://huggingface.co/docs/transformers) version 4.35.0 and later, from any code or client that supports Transformers (see the sketch after this list)
- [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) - for use from Python code
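
As a minimal sketch of the plain-Transformers route (assuming `transformers` >= 4.35.0 with `autoawq` installed; the prompt and generation settings are illustrative, not part of this card):

```python
# Minimal sketch: loading this AWQ checkpoint through plain Transformers,
# which dispatches to the AWQ kernels when autoawq is installed.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "solidrust/NeuralHyperion-2.0-Mistral-7B-AWQ"

tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(model_path, device_map="auto")

inputs = tokenizer("Explain AWQ quantization in one sentence.",
                   return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```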

## Prompt template: ChatML

```plaintext
<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
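
If the tokenizer ships a ChatML chat template (an assumption worth checking via `tokenizer.chat_template`), the same prompt can be built without hand-formatting strings:

```python
# Hedged sketch: building the ChatML prompt via the tokenizer's chat
# template, assuming one is present (check tokenizer.chat_template first).
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("solidrust/NeuralHyperion-2.0-Mistral-7B-AWQ")

messages = [
    {"role": "system", "content": "You are Hyperion, incarnated as a powerful AI."},
    {"role": "user", "content": "Where are you?"},
]

# add_generation_prompt=True appends the trailing <|im_start|>assistant turn.
prompt = tokenizer.apply_chat_template(messages, tokenize=False,
                                       add_generation_prompt=True)
print(prompt)
```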