rubenroy committed on
Commit 930965e · verified · 1 Parent(s): 17a9504

Update README.md

Files changed (1): README.md +123 -7
README.md CHANGED
@@ -1,22 +1,138 @@
  ---
- base_model: unsloth/qwen2.5-14b-instruct-bnb-4bit
  tags:
  - text-generation-inference
  - transformers
  - unsloth
  - qwen2
  - trl
  license: apache-2.0
  language:
  - en
  ---

- # Uploaded model

- - **Developed by:** rubenroy
- - **License:** apache-2.0
- - **Finetuned from model :** unsloth/qwen2.5-14b-instruct-bnb-4bit

- This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.

- [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
  ---
+ base_model: Qwen/Qwen2.5-14B-Instruct
  tags:
  - text-generation-inference
  - transformers
  - unsloth
  - qwen2
  - trl
+ - gammacorpus
+ - zurich
+ - chat
+ - conversational
  license: apache-2.0
  language:
  - en
+ datasets:
+ - rubenroy/GammaCorpus-v2-5m
+ pipeline_tag: text-generation
+ library_name: transformers
  ---

+ ![Zurich Banner](https://cdn.ruben-roy.com/AI/Zurich/img/banner-14B-5m.png)
+
+ # Zurich 14B GammaCorpus v2-5m
+ *A Qwen 2.5 model fine-tuned on the GammaCorpus dataset*
+
+ ## Overview
+ Zurich 14B GammaCorpus v2-5m is a fine-tune of Alibaba's **Qwen 2.5 14B Instruct** model. Zurich is designed to outperform other models of a similar size while showcasing the [GammaCorpus v2-5m](https://huggingface.co/datasets/rubenroy/GammaCorpus-v2-5m) dataset.
+
+ ## Model Details
+ - **Base Model:** [Qwen/Qwen2.5-14B-Instruct](https://huggingface.co/Qwen/Qwen2.5-14B-Instruct)
+ - **Type:** Causal Language Model
+ - **Architecture:** Transformers with RoPE, SwiGLU, RMSNorm, and Attention QKV bias
+ - **Number of Parameters:** 14.7B
+ - **Number of Parameters (Non-Embedding):** 13.1B
+ - **Number of Layers:** 48
+ - **Number of Attention Heads (GQA):** 40 for Q and 8 for KV
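+
+ These architecture details can be checked directly against the model's configuration file. The snippet below is a minimal sketch using the standard `transformers` `AutoConfig` API; the field names follow the Qwen2 config schema:
+
+ ```python
+ from transformers import AutoConfig
+
+ # Load only the configuration (nothing beyond config.json is downloaded)
+ config = AutoConfig.from_pretrained("rubenroy/Zurich-14B-GCv2-5m")
+
+ print(config.num_hidden_layers)    # expected: 48
+ print(config.num_attention_heads)  # expected: 40 (query heads)
+ print(config.num_key_value_heads)  # expected: 8 (key/value heads, GQA)
+ ```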
+
+ ## Training Details
+
+ Zurich-14B-GCv2-5m was fine-tuned on 1 A100 GPU for ~90 minutes using the [Unsloth](https://unsloth.ai/) framework, and was trained for **60 epochs**. A rough sketch of a comparable setup is shown below.
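+
+ The exact training script and hyperparameters have not been published. As an illustration only, a comparable Unsloth + TRL fine-tuning setup might look like the following; every hyperparameter here is a placeholder assumption, not the configuration actually used for Zurich, and `SFTTrainer` keyword arguments vary slightly between `trl` versions:
+
+ ```python
+ from unsloth import FastLanguageModel
+ from trl import SFTTrainer
+ from transformers import TrainingArguments
+ from datasets import load_dataset
+
+ # Load the base model in 4-bit so it fits on a single GPU
+ model, tokenizer = FastLanguageModel.from_pretrained(
+     model_name="Qwen/Qwen2.5-14B-Instruct",
+     max_seq_length=2048,
+     load_in_4bit=True,
+ )
+
+ # Attach LoRA adapters; rank and target modules are illustrative
+ model = FastLanguageModel.get_peft_model(
+     model,
+     r=16,
+     lora_alpha=16,
+     target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
+                     "gate_proj", "up_proj", "down_proj"],
+ )
+
+ dataset = load_dataset("rubenroy/GammaCorpus-v2-5m", split="train")
+
+ trainer = SFTTrainer(
+     model=model,
+     tokenizer=tokenizer,
+     train_dataset=dataset,  # column mapping/chat formatting omitted for brevity
+     args=TrainingArguments(
+         per_device_train_batch_size=2,
+         gradient_accumulation_steps=4,
+         learning_rate=2e-4,
+         num_train_epochs=1,  # placeholder; the card reports 60 epochs
+         output_dir="outputs",
+     ),
+ )
+ trainer.train()
+ ```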
+
+ ## Usage
+
+ ### Requirements
+
+ We **strongly** recommend using the latest version of the `transformers` package. You can install or upgrade it with `pip`:
+
+ ```bash
+ pip install -U transformers
+ ```
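+
+ You can confirm the installed version at runtime; note that support for the Qwen2 architecture (which Qwen 2.5 uses) was added in `transformers` 4.37.0:
+
+ ```python
+ import transformers
+
+ # Qwen2 model support requires transformers >= 4.37.0
+ print(transformers.__version__)
+ ```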
+
+ ### Quickstart
+
+ Here is a code snippet with `apply_chat_template` that shows how to load the tokenizer and model and generate a response:
+
+ ```python
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+
+ model_name = "rubenroy/Zurich-14B-GCv2-5m"
+
+ # Load the model with automatic dtype and device placement
+ model = AutoModelForCausalLM.from_pretrained(
+     model_name,
+     torch_dtype="auto",
+     device_map="auto"
+ )
+ tokenizer = AutoTokenizer.from_pretrained(model_name)
+
+ prompt = "How tall is the Eiffel tower?"
+ messages = [
+     {"role": "system", "content": "You are Zurich, an AI assistant built on the Qwen 2.5 14B model developed by Alibaba Cloud, and fine-tuned by Ruben Roy. You are a helpful assistant."},
+     {"role": "user", "content": prompt}
+ ]
+
+ # Render the conversation with the model's chat template
+ text = tokenizer.apply_chat_template(
+     messages,
+     tokenize=False,
+     add_generation_prompt=True
+ )
+ model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
+
+ generated_ids = model.generate(
+     **model_inputs,
+     max_new_tokens=512
+ )
+ # Strip the prompt tokens so only the newly generated tokens remain
+ generated_ids = [
+     output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
+ ]
+
+ response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
+ print(response)
+ ```
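+
+ For interactive use, you can print tokens as they are generated instead of waiting for the full completion. This variant is a sketch built on `transformers`' built-in `TextStreamer`, reusing `model`, `tokenizer`, and `model_inputs` from the snippet above:
+
+ ```python
+ from transformers import TextStreamer
+
+ # Stream decoded tokens to stdout as they are produced, skipping the prompt
+ streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
+
+ model.generate(
+     **model_inputs,
+     max_new_tokens=512,
+     streamer=streamer
+ )
+ ```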
+
+ ## About GammaCorpus
+
+ This model, and all other Zurich models, are trained with GammaCorpus. GammaCorpus is a dataset on HuggingFace that is filled with structured and filtered multi-turn conversations.
+ GammaCorpus comes in four versions, each available in several sizes:
+
+ ### GammaCorpus v1
+ - 10k UNFILTERED
+ - 50k UNFILTERED
+ - 70k UNFILTERED
+
+ Here is a link to the GCv1 dataset collection:<br>
+ https://huggingface.co/collections/rubenroy/gammacorpus-v1-67935e4e52a04215f15a7a60
+
+ ### GammaCorpus v2
+ - 10k
+ - 50k
+ - 100k
+ - 500k
+ - 1m
+ - **5m <-- This is the version of GammaCorpus v2 that the Zurich model you are using was trained on.**
+
+ Here is a link to the GCv2 dataset collection:<br>
+ https://huggingface.co/collections/rubenroy/gammacorpus-v2-67935e895e1259c404a579df
+
+ ### GammaCorpus CoT
+ - Math 170k
+
+ Here is a link to the GC-CoT dataset collection:<br>
+ https://huggingface.co/collections/rubenroy/gammacorpus-cot-6795bbc950b62b1ced41d14f
+
+ ### GammaCorpus QA
+ - Fact 450k
+
+ Here is a link to the GC-QA dataset collection:<br>
+ https://huggingface.co/collections/rubenroy/gammacorpus-qa-679857017bb3855234c1d8c7
+
+ The full GammaCorpus dataset collection can be found [here](https://huggingface.co/collections/rubenroy/gammacorpus-67765abf607615a0eb6d61ac).
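+
+ Any of these datasets can be pulled directly with the `datasets` library. The call below is a minimal sketch for the v2-5m set used to train this model; check the dataset page for the exact splits and column names:
+
+ ```python
+ from datasets import load_dataset
+
+ # Download GammaCorpus v2-5m (the dataset this model was trained on)
+ dataset = load_dataset("rubenroy/GammaCorpus-v2-5m", split="train")
+
+ print(dataset)     # size and column names
+ print(dataset[0])  # first conversation record
+ ```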
+
+ ## Known Limitations
+
+ - **Bias:** We have tried our best to mitigate as much bias as we can, but please be aware that the model might still generate some biased answers.
+
+ ## Additional Information
+
+ ### Licensing Information
+
+ The model is released under the **[Apache 2.0 License](https://www.apache.org/licenses/LICENSE-2.0)**. Please refer to the license for usage rights and restrictions.