---
license: apache-2.0
datasets:
- rockship/quizgen
language:
- en
base_model:
- meta-llama/Llama-3.2-1B-Instruct
- vietphuon/Llama-3.2-1B-Instruct-alpaca-then-quizgen-16bit
---

# What's new?

- This is the pre-compiled version of Llama-3.2-1B-Instruct, fine-tuned on our synthetic-hybrid QuizGen dataset, for serving as an LLM endpoint on AWS SageMaker.
- The fine-tuning was performed with Unsloth in 4-bit quantized mode, and the adapters were then merged into a 16-bit model, since the AWS tutorial only shows how to compile a 16-bit model (a hedged sketch of this workflow follows below).
- Note that to run inference on this model, you need to load the tokenizer from the base model at https://huggingface.co/meta-llama/Llama-3.2-1B-Instruct (an inference sketch is included after this list).
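
The exact training script is not part of this card, but a minimal sketch of the Unsloth load-in-4-bit then merge-to-16-bit workflow might look like the following. The LoRA hyperparameters, target modules, and output directory name are illustrative assumptions, not the values actually used for this model.

```python
# Hedged sketch of the 4-bit fine-tune + 16-bit merge workflow with Unsloth.
# Hyperparameters, target modules, and paths below are illustrative assumptions.
from unsloth import FastLanguageModel

# Load the base model in 4-bit for memory-efficient fine-tuning.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="meta-llama/Llama-3.2-1B-Instruct",
    max_seq_length=2048,
    load_in_4bit=True,
)

# Attach LoRA adapters (placeholder settings, not the ones used for this model).
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
)

# ... supervised fine-tuning on the rockship/quizgen dataset would happen here ...

# Merge the LoRA weights into the base model and save in 16-bit,
# which is the format the AWS compilation tutorial expects.
model.save_pretrained_merged(
    "llama-3.2-1b-quizgen-16bit", tokenizer, save_method="merged_16bit"
)
```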
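
For reference, here is a minimal inference sketch. It assumes the pre-compiled artifacts can be loaded with optimum-neuron's `NeuronModelForCausalLM` (the toolchain used in the AWS SageMaker/Neuron tutorials), which this card does not state explicitly; `REPO_ID` is a placeholder for this repository's Hub id, and the prompt is only an example.

```python
# Minimal inference sketch. Assumptions: the pre-compiled artifacts load through
# optimum-neuron's NeuronModelForCausalLM, and REPO_ID stands in for this repo's Hub id.
from transformers import AutoTokenizer
from optimum.neuron import NeuronModelForCausalLM

# The tokenizer must come from the base model, as noted above.
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-3.2-1B-Instruct")

# Load the pre-compiled model (replace REPO_ID with this repository's id).
model = NeuronModelForCausalLM.from_pretrained("REPO_ID")

# Example chat-style prompt.
messages = [{"role": "user", "content": "Generate one multiple-choice question about photosynthesis."}]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")

# Generate and print only the newly produced tokens.
output_ids = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```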