|
--- |
|
library_name: peft |
|
tags: |
|
- nlp |
|
- code |
|
- instruct |
|
- llama |
|
datasets: |
|
- Intel/orca_dpo_pairs |
|
base_model: meta-llama/Meta-Llama-3.1-8B-Instruct |
|
license: apache-2.0 |
|
language: |
|
- en |
|
pipeline_tag: text-generation
|
--- |
|
|
|
### Finetuning Overview: |
|
|
|
**Model Used:** meta-llama/Meta-Llama-3.1-8B-Instruct |
|
**Dataset:** Intel/orca_dpo_pairs |
|
|
|
#### Dataset Insights: |
|
|
|
The Intel/orca_dpo_pairs dataset is a preference-pair subset derived from OpenOrca, which includes ~1M GPT-4 completions and ~3.2M GPT-3.5 completions tabularized to align with the distributions in the Orca paper. Each entry pairs a prompt with a preferred ("chosen") and a dispreferred ("rejected") response, making the dataset suitable for preference-optimization methods such as DPO and ORPO.
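A minimal sketch (no download required) of what a record in this style of preference dataset looks like and how it might be flattened for training. The field names (`system`, `question`, `chosen`, `rejected`) follow the dataset card, and the example values are hypothetical:

```python
# Illustrative record in the shape of an Intel/orca_dpo_pairs row
# (example content is made up for demonstration).
record = {
    "system": "You are a helpful assistant.",
    "question": "What is the capital of France?",
    "chosen": "The capital of France is Paris.",
    "rejected": "France's capital is Lyon.",
}

def to_preference_pair(rec):
    # Combine the system message and question into a single prompt,
    # keeping the preferred ("chosen") and dispreferred ("rejected")
    # responses for preference optimization.
    prompt = f"{rec['system']}\n\n{rec['question']}".strip()
    return {"prompt": prompt, "chosen": rec["chosen"], "rejected": rec["rejected"]}

pair = to_preference_pair(record)
```

The clear good/bad labeling per prompt is what lets ORPO contrast the two responses during finetuning.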
|
|
|
#### Finetuning Details: |
|
|
|
This finetuning run was performed using [MonsterAPI](https://monsterapi.ai)'s LLM finetuner with ORPO (Odds Ratio Preference Optimization), which folds preference optimization into the supervised finetuning step.
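For intuition, ORPO adds an odds-ratio penalty to the usual supervised loss on the chosen response. A toy sketch of that objective in plain Python (sequence-level probabilities and the weight `lam` are illustrative, not the values used in this run):

```python
import math

def odds(p):
    # Odds of a response probability p in (0, 1).
    return p / (1.0 - p)

def orpo_penalty(p_chosen, p_rejected):
    # Odds-ratio term: -log sigmoid(log(odds(chosen) / odds(rejected))).
    # Small when the model already favors the chosen response.
    log_or = math.log(odds(p_chosen)) - math.log(odds(p_rejected))
    return -math.log(1.0 / (1.0 + math.exp(-log_or)))

def orpo_loss(nll_chosen, p_chosen, p_rejected, lam=0.1):
    # Total objective: supervised NLL on the chosen response
    # plus lam times the odds-ratio penalty.
    return nll_chosen + lam * orpo_penalty(p_chosen, p_rejected)
```

Unlike DPO, this objective needs no frozen reference model: the penalty is computed from the policy's own probabilities on the chosen and rejected responses.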
|
|
|
- Completed in a total duration of 1 hour and 39 minutes for 1 epoch. |
|
- Cost: `$2.69` for the entire run.
|
|
|
#### Hyperparameters & Additional Details: |
|
|
|
- **Epochs:** 1 |
|
- **Cost Per Epoch:** $2.69 |
|
- **Total Finetuning Cost:** $2.69 |
|
- **Model Path:** meta-llama/Meta-Llama-3.1-8B-Instruct |
|
- **Learning Rate:** 0.001 |
|
- **Data Split:** 90% train / 10% validation
|
- **Gradient Accumulation Steps:** 16 |
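The hyperparameters above can be collected into a plain config dict for reference; the key names are illustrative (not MonsterAPI's actual API), and the per-device batch size of 1 is an assumption used only to show how gradient accumulation sets the effective batch size:

```python
# Illustrative config mirroring the hyperparameters listed above.
config = {
    "base_model": "meta-llama/Meta-Llama-3.1-8B-Instruct",
    "epochs": 1,
    "learning_rate": 1e-3,
    "train_split": 0.9,
    "eval_split": 0.1,
    "gradient_accumulation_steps": 16,
}

# Assuming a per-device batch size of 1, accumulating gradients over
# 16 steps gives an effective batch size of 16 per optimizer update.
per_device_batch_size = 1
effective_batch = per_device_batch_size * config["gradient_accumulation_steps"]
```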