Update README.md
README.md
After 3 to 4 epochs, the model began to overfit regardless of the strategies employed. Increasing both batch size and the number of epochs resulted in higher final training and evaluation cross-entropy.
Following an extensive grid search, supervised fine-tuning of [Llama 3.1-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B-Instruct) with LoRA+ and the parameters mentioned below yielded the best training and evaluation cross-entropy.
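For reference, below is a minimal sketch of what such a LoRA+ setup can look like with the `peft` library's LoRA+ optimizer helper. The rank, alpha, learning rate, LR ratio, and target modules are illustrative placeholders rather than the values actually used; only the epoch count (4) comes from the hyperparameters listed in this card.

```python
# Minimal LoRA+ fine-tuning sketch (illustrative values; only num_train_epochs=4 is from this card).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, Trainer, TrainingArguments
from peft import LoraConfig, get_peft_model
from peft.optimizers import create_loraplus_optimizer  # LoRA+ helper shipped with recent peft releases

model_id = "meta-llama/Meta-Llama-3.1-8B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)  # used to tokenize the SFT dataset (not shown)
base_model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)

# LoRA adapter on the attention projections; rank and alpha are placeholders.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(base_model, lora_config)

# LoRA+ trains the adapter's B matrices with a higher learning rate than the A matrices.
optimizer = create_loraplus_optimizer(
    model=model,
    optimizer_cls=torch.optim.AdamW,
    lr=2e-4,               # placeholder base learning rate
    loraplus_lr_ratio=16,  # B-matrix LR = 16 x A-matrix LR
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="out", num_train_epochs=4),
    train_dataset=None,            # plug in the tokenized training set here
    optimizers=(optimizer, None),  # None -> Trainer creates its default LR scheduler
)
```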
#### Preprocessing [optional]
[...]
- Number of epochs: 4
#### Speeds, Sizes, Times
Training updated ~550 million parameters, lasted a bit more than 30 minutes, and ran for 4 epochs. GPU utilization stayed above 90% throughout the run.
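As a quick sanity check on that figure, the trainable-parameter count can be read directly off the `peft`-wrapped model (continuing the hypothetical `model` object from the sketch above):

```python
# Count trainable vs. total parameters on the peft-wrapped model from the sketch above.
trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
total = sum(p.numel() for p in model.parameters())
print(f"trainable params: {trainable:,} / {total:,} ({100 * trainable / total:.2f}%)")

# peft models also report the same figures directly:
model.print_trainable_parameters()
```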
The final evaluation cross-entropy ended around 0.4.
#### Metrics
Since the fine-tuned model is designed to explain and, where possible, summarize newly learned data, ROUGE and BERTScore metrics were measured on a sample of 50 manually crafted questions. The reference answers were written while the training and evaluation sets were being created.
Given that GPT-4-turbo had already been used to generate the reference questions, I did not compare my model against it. Instead, I compared it against the following models:
| Metric            | quantum-research-bot-v1.0 | Meta-Llama-3.1-8B-Instruct | gemini-1.5-pro |
|:------------------|:--------------------------|:---------------------------|:---------------|
| **BERTScore F1**  | 0.5821                    | 0.3305                     | 0.4982         |
| **ROUGE-1**       | 0.6045                    | 0.3152                     | 0.5029         |
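As a rough illustration, scores like those above can be computed with the Hugging Face `evaluate` package; the answer lists below are placeholders standing in for the model outputs and reference answers from the 50-question sample:

```python
# Sketch: scoring generated answers against reference answers with ROUGE and BERTScore.
import evaluate

rouge = evaluate.load("rouge")
bertscore = evaluate.load("bertscore")

predictions = ["..."]  # answers generated by the model under evaluation (placeholder)
references = ["..."]   # manually crafted reference answers (placeholder)

rouge_scores = rouge.compute(predictions=predictions, references=references)
bert_scores = bertscore.compute(predictions=predictions, references=references, lang="en")

print("ROUGE-1:", rouge_scores["rouge1"])
print("BERTScore F1:", sum(bert_scores["f1"]) / len(bert_scores["f1"]))
```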