---
language:
- en
license: other
license_name: qwen
license_link: https://huggingface.co/Qwen/Qwen2.5-72B-Instruct/blob/main/LICENSE
library_name: transformers
tags:
- generated_from_trainer
base_model: Qwen/Qwen2.5-1.5B-Instruct
model-index:
- name: miniclaus-qw1.5B-UNAMGS
  results: []
datasets:
- Magpie-Align/Magpie-Pro-MT-300K-v0.1
---

[![QuantFactory Banner](https://lh7-rt.googleusercontent.com/docsz/AD_4nXeiuCm7c8lEwEJuRey9kiVZsRn2W-b4pWlu3-X534V3YmVuVc2ZL-NXg2RkzSOOS2JXGHutDuyyNAUtdJI65jGTo8jT9Y99tMi4H4MqL44Uc5QKG77B0d6-JfIkZHFaUA71-RtjyYZWVIhqsNZcx8-OMaA?key=xt3VSDoCbmTY7o-cwwOFwQ)](https://hf.co/QuantFactory)

# QuantFactory/miniclaus-qw1.5B-UNAMGS-GGUF
This is a quantized version of [fblgit/miniclaus-qw1.5B-UNAMGS](https://huggingface.co/fblgit/miniclaus-qw1.5B-UNAMGS), created using llama.cpp.
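A minimal usage sketch with `llama-cpp-python` and `huggingface_hub`; the GGUF filename below is hypothetical, so check the repo's file list for the quantizations actually provided:

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama  # pip install llama-cpp-python

# Download one quantized file from this repo.
model_path = hf_hub_download(
    repo_id="QuantFactory/miniclaus-qw1.5B-UNAMGS-GGUF",
    filename="miniclaus-qw1.5B-UNAMGS.Q4_K_M.gguf",  # hypothetical filename
)

# Load the GGUF model and run a chat completion.
llm = Llama(model_path=model_path, n_ctx=4096)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Explain GGUF in one sentence."}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```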

# Original Model Card

# miniclaus-qw1.5B-UNAMGS

Trained with `Magpie-Align/Magpie-Pro-MT-300K-v0.1`, using MGS & UNA (applied to the MLP) on this tiny but powerful model.

![miniclaus-qw1.5B-UNAMGS](https://huggingface.co/fblgit/miniclaus-qw1.5B-UNAMGS/resolve/main/miniclaus_qw15-UNAMGS.png)
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)

It achieves the following results on the evaluation set:
- Loss: 0.7193
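For intuition, assuming this loss is the mean token-level cross-entropy in nats (the usual convention for causal-LM training, though not stated on the card), it corresponds to a perplexity of roughly exp(0.7193) ≈ 2.05:

```python
import math

eval_loss = 0.7193  # final validation loss from the table below
# Valid only under the assumption that the loss is mean cross-entropy in nats.
perplexity = math.exp(eval_loss)
print(round(perplexity, 2))  # 2.05
```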

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (the implied gradient-accumulation arithmetic is sketched after the list):
- train_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 128
- total_eval_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- num_epochs: 1
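The list gives a per-device batch size of 1 on 8 devices but a total batch size of 128, which implies gradient accumulation (not listed above). A minimal sketch of the arithmetic, assuming the standard relationship total = per-device × devices × accumulation steps:

```python
# Hyperparameters as reported on the card.
per_device_train_batch_size = 1
num_devices = 8
total_train_batch_size = 128

# Solve for the implied accumulation steps: 128 / (1 * 8) = 16.
grad_accumulation_steps = total_train_batch_size // (
    per_device_train_batch_size * num_devices
)
assert per_device_train_batch_size * num_devices * grad_accumulation_steps == 128
print(grad_accumulation_steps)  # 16
```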

### Training results

| Training Loss | Epoch  | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.1641        | 0.0007 | 1    | 0.8514          |
| 0.9246        | 0.0503 | 76   | 0.7921          |
| 0.8791        | 0.1006 | 152  | 0.7727          |
| 0.8507        | 0.1509 | 228  | 0.7611          |
| 0.8376        | 0.2012 | 304  | 0.7534          |
| 0.793         | 0.2515 | 380  | 0.7467          |
| 0.7834        | 0.3018 | 456  | 0.7421          |
| 0.7807        | 0.3521 | 532  | 0.7384          |
| 0.764         | 0.4023 | 608  | 0.7359          |
| 0.7738        | 0.4526 | 684  | 0.7320          |
| 0.7425        | 0.5029 | 760  | 0.7300          |
| 0.7519        | 0.5532 | 836  | 0.7279          |
| 0.7461        | 0.6035 | 912  | 0.7255          |
| 0.7489        | 0.6538 | 988  | 0.7245          |
| 0.7614        | 0.7041 | 1064 | 0.7222          |
| 0.7576        | 0.7544 | 1140 | 0.7222          |
| 0.7303        | 0.8047 | 1216 | 0.7209          |
| 0.7332        | 0.8550 | 1292 | 0.7199          |
| 0.7541        | 0.9053 | 1368 | 0.7202          |
| 0.7369        | 0.9556 | 1444 | 0.7193          |

### Framework versions

- PEFT 0.13.2
- Transformers 4.45.2
- PyTorch 2.3.0+cu121
- Datasets 3.0.1
- Tokenizers 0.20.1

## Thanks
- Qwen Team for their outstanding model
- Magpie Team for contributing plenty of datasets
- Cybertron Cloud Compute

## Citations
```
@misc{miniclaus-qw15,
  title = {MiniClaus: 1.5B UNAMGS},
  author = {Xavier Murias},
  year = {2024},
  publisher = {HuggingFace},
  journal = {HuggingFace repository},
  howpublished = {\url{https://huggingface.co/fblgit/miniclaus-qw1.5B-UNAMGS}},
}

@misc{qwen2.5,
  title = {Qwen2.5: A Party of Foundation Models},
  url = {https://qwenlm.github.io/blog/qwen2.5/},
  author = {Qwen Team},
  month = {September},
  year = {2024}
}

@article{qwen2,
  title = {Qwen2 Technical Report},
  author = {An Yang and Baosong Yang and Binyuan Hui and Bo Zheng and Bowen Yu and Chang Zhou and Chengpeng Li and Chengyuan Li and Dayiheng Liu and Fei Huang and Guanting Dong and Haoran Wei and Huan Lin and Jialong Tang and Jialin Wang and Jian Yang and Jianhong Tu and Jianwei Zhang and Jianxin Ma and Jin Xu and Jingren Zhou and Jinze Bai and Jinzheng He and Junyang Lin and Kai Dang and Keming Lu and Keqin Chen and Kexin Yang and Mei Li and Mingfeng Xue and Na Ni and Pei Zhang and Peng Wang and Ru Peng and Rui Men and Ruize Gao and Runji Lin and Shijie Wang and Shuai Bai and Sinan Tan and Tianhang Zhu and Tianhao Li and Tianyu Liu and Wenbin Ge and Xiaodong Deng and Xiaohuan Zhou and Xingzhang Ren and Xinyu Zhang and Xipin Wei and Xuancheng Ren and Yang Fan and Yang Yao and Yichang Zhang and Yu Wan and Yunfei Chu and Yuqiong Liu and Zeyu Cui and Zhenru Zhang and Zhihao Fan},
  journal = {arXiv preprint arXiv:2407.10671},
  year = {2024}
}
```