---
license: cc-by-nc-nd-4.0
datasets:
- fblgit/tree-of-knowledge
- garage-bAInd/Open-Platypus
- allenai/ultrafeedback_binarized_cleaned
- Open-Orca/OpenOrca
library_name: transformers
tags:
- UNA
- juanako
- cybertron
- xaberius
---
# Model Card for una-xaberius-34b-v1-beta (UNA: Uniform Neural Alignment)
**This is another King-Breed from Juanako.AI**
**We have identified some problems with regular quants.** [Use these models to play with Xaberius-34B and harness its power in full](https://huggingface.co/models?search=xaberius%20lonestriker).
**Unfortunately, we were not able to use any of TheBloke's quants; they seem to produce undesired results.**
Introducing THE MODEL: **XABERIUS 34B v1-BETA**, an *experimental* Yi-34B (LLaMa-architecture) based model, the best in its series. Trained with SFT, DPO and UNA (Uniform Neural Alignment) on multiple datasets.
Timeline:
* 05-Dec-2023 **v1-beta released**
* 08-Dec-2023 **Evaluation has been "RUNNING" for 2 days.. no results yet**
* 09-Dec-2023 **Evaluation "FINISHED", confirming the #1 spot**, outperforming the contaminated and disqualified tigerbot :)
[Results Here](https://huggingface.co/datasets/open-llm-leaderboard/details_fblgit__una-xaberius-34b-v1beta/blob/main/results_2023-12-09T11-16-37.904970.json)
Sidenote: the tests took 19H to run; we wonder what happened during the 48H that HF held this one.. manually releasing other results in the interim??..
| Model | Average | ARC (25-s) | HellaSwag (10-s) | MMLU (5-s) | TruthfulQA (MC) (0-s) | Winogrande (5-s) | GSM8K (5-s) |
| --- | --- | --- | --- | --- | --- | --- | --- |
| [fblgit/una-cybertron-7b-v1-fp16](https://huggingface.co/fblgit/una-cybertron-7b-v1-fp16) | **69.49** | **68.43** | **85.85** | 63.34 | **63.28** | **80.90** | **55.12** |
| [fblgit/una-cybertron-7b-v2-bf16](https://huggingface.co/fblgit/una-cybertron-7b-v2-bf16) | **69.67** | **68.26** | **85.?4** | 63.23 | **64.63** | **81.37** | **55.04** |
| [fblgit/una-xaberius-34b-v1beta](https://huggingface.co/fblgit/una-xaberius-34b-v1beta) | **74.18** | **70.39** | **86.77** | **78.15** | **61.45** | **84.93** | **63.38** |
## Evaluations
- Scores **74.21**, outperforming the former leader tigerbot-70b-chat and landing the #1 position on the HuggingFace LeaderBoard: 08 December 2023.
- Scores **79.13** in MMLU, setting a new record not just for 34B models but for all open-source LLMs :)
SideNote: MMLU was a very solid 79+.. weird, we'll dig further into this for irregularities :)
## Model Details
Trained with UNA: the Uniform Neural Alignment technique (paper coming out soon).
* What is **NOT** UNA? It's not a merged-layers model. It is not SLERP or SLURP or similar.
* What **is** UNA? A formula & a technique to *TAME* models.
* When will the code and paper be released? When we have time; contribute and it'll be faster.
### Model Description
- **Developed by:** [juanako.ai](https://juanako.ai)
- **Author:** [Xavier M.]([email protected])
- **Investors:** [CONTACT HERE]([email protected])
- **Model type:** Yi-34B (LLaMa architecture)
- **Funded by:** Cybertron's H100s, with only a few hours of training.
### Prompt
The model is very good and works well with almost any prompt, but the ChatML format and the Alpaca system prompt get the best results (a minimal loading sketch follows the templates below):
```
<|im_start|>system
- You are a helpful assistant chatbot trained by MosaicML.
- You answer questions.
- You are excited to be able to help the user, but will refuse to do anything that could be considered harmful to the user.
- You are more than just an information source, you are also able to write poetry, short stories, and make jokes.<|im_end|>
<|im_start|>user
Explain QKV<|im_end|>
<|im_start|>assistant
```
```
### Assistant: I am StableVicuna, a large language model created by CarperAI. I am here to chat!
### Human: Explain QKV
### Assistant:
```
```
[Round <|round|>]
问:Explain QKV
答:
```
```
[Round <|round|>]
Question:Explain QKV
Answer:
```
```
Question:Explain QKV
Answer:
```
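Purely as a reference, here is a minimal sketch of loading the model with plain `transformers` and querying it with the ChatML template above. The dtype, device map and generation settings are illustrative assumptions, not tuned recommendations.

```python
# Minimal sketch (assumptions: bf16 weights, enough GPU memory for device_map="auto").
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "fblgit/una-xaberius-34b-v1beta"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# Build the ChatML prompt by hand, matching the template shown above.
prompt = (
    "<|im_start|>system\n"
    "- You are a helpful assistant chatbot.<|im_end|>\n"
    "<|im_start|>user\n"
    "Explain QKV<|im_end|>\n"
    "<|im_start|>assistant\n"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=False)

# Decode only the newly generated tokens, skipping the echoed prompt.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```

For quantized inference, prefer the LoneStriker ExLlamaV2 exports linked above.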
### Framework versions
- Transformers 4.35.2-UNA
- Pytorch 2.1.0
- Datasets 2.14.6
- Tokenizers 0.14.1
### Citations
If you find Xaberius, Cybertron, Juanako or any of our models useful, especially if you use them for your big brand or you clone/merge/SLERP our models, please cite:
```
@misc{unaxaberius34b,
title={Xaberius 34B: Uniform Neural Alignment},
author={Xavier Murias},
year={2023},
publisher = {HuggingFace},
journal = {HuggingFace repository},
howpublished = {\url{https://huggingface.co/fblgit/una-xaberius-34b-v1beta}},
}
```
**Thanks to LoneStriker for his high-quality ExLlamaV2 models that work properly.**
**Enormous kudos to the Yi-34B team for the outstanding model; UNA is only as good as its pre-trained model.** THANK YOU!