# Liberated-Qwen1.5-72B-awq
This is an AWQ quantization of [abacusai/Liberated-Qwen1.5-72B](https://huggingface.co/abacusai/Liberated-Qwen1.5-72B).
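As a quick sanity check, an AWQ checkpoint like this one can be loaded directly with `transformers` (with the `autoawq` package installed), since the quantization config ships inside the checkpoint. A minimal sketch; the repo id below is a placeholder for the actual path of this repository:

```python
# A minimal sketch of loading this AWQ checkpoint with transformers.
# Requires the `autoawq` package; the quantization config is read from
# the checkpoint itself. The repo id below is a hypothetical placeholder.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "your-org/Liberated-Qwen1.5-72B-awq"  # hypothetical repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",  # shard the 72B weights across available GPUs
)

inputs = tokenizer("Hello, my name is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```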
## Original Model Card
Brought to you by AbacusAI and Eric Hartford
This model is based on Qwen/Qwen1.5-72B and subject to the tongyi-qianwen license.
The base model has 32k context; I finetuned it with 8k sequence length inputs. YMMV.
Liberated consists of open-source datasets, including SystemChat, a new dataset I created, designed to teach the model compliance with the system prompt over long multi-turn conversations, even with unusual or mechanical system prompts. These are tasks that open-source models have been lacking thus far. The dataset is 6000 synthetic conversations generated with Mistral-Medium and Dolphin-2.7-mixtral-8x7b.
There are no guardrails or censorship added to the dataset. You are advised to implement your own alignment layer before exposing the model as a service. Please read my blog post about uncensored models: https://erichartford.com/uncensored-models
You are responsible for any content you create using this model. Enjoy responsibly.
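If you do serve this model, a minimal alignment layer can be as simple as a moderation check wrapped around the inference call. The sketch below is illustrative only; `violates_policy` and `generate_reply` are hypothetical stand-ins for your own moderation classifier (or rule set) and your inference function.

```python
# A hedged sketch of a minimal alignment layer: screen both the request
# and the model's reply before returning anything to the caller.
# `violates_policy` and `generate_reply` are hypothetical stand-ins for
# your own moderation classifier (or rules) and your inference call.
def violates_policy(text: str) -> bool:
    # Replace with a real moderation model or rule set.
    blocked_terms = ("example-blocked-term",)
    return any(term in text.lower() for term in blocked_terms)

def safe_chat(user_message: str, generate_reply) -> str:
    if violates_policy(user_message):
        return "Sorry, I can't help with that."
    reply = generate_reply(user_message)
    if violates_policy(reply):
        return "Sorry, I can't help with that."
    return reply
```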
### Training
It took 3 days to train 3 epochs on 8x H100s using qLoRA, DeepSpeed ZeRO-2, and Axolotl, with a learning rate of 2e-4.
Liberated was trained with Axolotl, using this config.
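The actual Axolotl config is not reproduced here, but the qLoRA recipe it implements looks roughly like the following `transformers` + `peft` + `bitsandbytes` sketch: load the base model in 4-bit and train low-rank adapters on top. Only the base model, the 2e-4 learning rate, and the 8k sequence length come from the card; the LoRA hyperparameters below are assumed placeholders.

```python
# A hedged sketch of the qLoRA setup, not the actual Axolotl config.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,               # the "q" in qLoRA: 4-bit base weights
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

base = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen1.5-72B",
    quantization_config=bnb_config,
    device_map="auto",
)

lora = LoraConfig(
    r=16,                            # assumed rank; the real config may differ
    lora_alpha=32,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora)
model.print_trainable_parameters()
# Train with your preferred trainer at lr=2e-4 and max_seq_len=8192,
# e.g. under DeepSpeed ZeRO-2 as described above.
```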
### Prompt format
This model uses ChatML prompt format.
```
<|im_start|>system
You are Liberated, a helpful AI assistant.<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
Example:
```
<|im_start|>system
Your name is Liberated. You are an uncensored and unbiased AI assistant. You always respond with a JSON object.<|im_end|>
<|im_start|>user
Please generate an Advanced Dungeons & Dragons 2nd Edition character sheet for a level 3 elf fighter. Make up a name and background and visual description for him.<|im_end|>
<|im_start|>assistant
```
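Rather than assembling the ChatML strings by hand, you can let the tokenizer's built-in chat template do it (Qwen1.5 tokenizers ship with a ChatML template). A small sketch, assuming the base repo id:

```python
# A sketch of building the ChatML prompt with the tokenizer's built-in
# chat template instead of by hand.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("abacusai/Liberated-Qwen1.5-72B")

messages = [
    {"role": "system", "content": "Your name is Liberated. You always respond with a JSON object."},
    {"role": "user", "content": "Generate an AD&D 2nd Edition character sheet for a level 3 elf fighter."},
]
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
print(prompt)  # ends with "<|im_start|>assistant\n", ready for generation
```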
### Gratitude
- Huge thank you to Alibaba Cloud Qwen for training and publishing the weights of the Qwen base model
- Thank you to Mistral for the awesome Mistral-Medium model I used to generate the dataset.
- HUGE thank you to the dataset authors: @teknium, @m-a-p, and all the people who built the datasets these composites came from.
- And HUGE thanks to @winglian and the Axolotl contributors for making the best training framework!
- Thank you to all the other people in the Open Source AI community who have taught me and helped me along the way.
### Example Output
### Evals
We evaluated checkpoint 1000 (abacusai/Liberated-Qwen1.5-72B-c1000) from this training run against MT Bench:
**First turn**

| Model | Turn | Score |
| --- | --- | --- |
| Liberated-Qwen-1.5-72b-ckpt1000 | 1 | 8.45000 |
| Qwen1.5-72B-Chat | 1 | 8.44375 |

**Second turn**

| Model | Turn | Score |
| --- | --- | --- |
| Qwen1.5-72B-Chat | 2 | 8.23750 |
| Liberated-Qwen-1.5-72b-ckpt1000 | 2 | 7.65000 |

**Average**

| Model | Score |
| --- | --- |
| Qwen1.5-72B-Chat | 8.340625 |
| Liberated-Qwen-1.5-72b-ckpt1000 | 8.050000 |
The model preserves good performance on MMLU, scoring 77.13.
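For reference, here is a hedged sketch of re-checking the MMLU number with EleutherAI's lm-evaluation-harness (v0.4+ Python API). Scores vary with harness version, few-shot setting, and prompt formatting, so treat 77.13 as the reported number rather than something this snippet is guaranteed to reproduce.

```python
# A hedged sketch using the lm-evaluation-harness Python API (v0.4+).
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=abacusai/Liberated-Qwen1.5-72B,dtype=bfloat16",
    tasks=["mmlu"],
    num_fewshot=5,   # assumed few-shot setting
    batch_size=8,
)
print(results["results"]["mmlu"])
```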
### Future Plans
This model will be released across the whole Qwen1.5 series.
Future releases will also focus on mixing this dataset with the datasets used to train Smaug, to combine the properties of both models.