
TinyLlama/TinyLlama-1.1B-Chat-v1.0 AWQ

Model Summary

This is the chat model fine-tuned on top of TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T, following HF's Zephyr training recipe. The model was "initially fine-tuned on a variant of the UltraChat dataset, which contains a diverse range of synthetic dialogues generated by ChatGPT. We then further aligned the model with 🤗 TRL's DPOTrainer on the openbmb/UltraFeedback dataset, which contains 64k prompts and model completions that are ranked by GPT-4."
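Since this is a Zephyr-recipe chat model, prompts should follow the Zephyr-style chat template (role tags `<|system|>`, `<|user|>`, `<|assistant|>`, each turn terminated by `</s>`). A minimal sketch of that formatting, written as a plain helper rather than via a tokenizer's built-in chat template (the `format_chat` helper and the example messages are illustrative, not part of this card):

```python
def format_chat(messages):
    """Render a message list in the Zephyr-style template used by
    TinyLlama-1.1B-Chat-v1.0: each turn is '<|role|>\\n{content}</s>\\n',
    and the prompt ends with '<|assistant|>\\n' so the model's generation
    continues as the assistant turn."""
    prompt = ""
    for m in messages:
        prompt += f"<|{m['role']}|>\n{m['content']}</s>\n"
    # Leave an open assistant tag for the model to complete.
    return prompt + "<|assistant|>\n"

messages = [
    {"role": "system", "content": "You are a friendly chatbot."},
    {"role": "user", "content": "What is AWQ quantization?"},
]
print(format_chat(messages))
```

In practice, `tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)` from 🤗 Transformers produces the same string from the template stored with the model.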

Weights: Safetensors
Model size: 261M params
Tensor types: I32, FP16
Inference API (serverless) has been turned off for this model.

Model tree for solidrust/TinyLlama-1.1B-Chat-v1.0-AWQ: quantized from TinyLlama/TinyLlama-1.1B-Chat-v1.0.