Update README.md
README.md CHANGED

@@ -15,8 +15,7 @@ base_model: meta-llama/Meta-Llama-3.1-8B-Instruct
 
 # Model Card for Model ID
 
-
-
+Quantum Research Bot is a model fine-tuned on the latest research data in quantum science. It contains data from the second half of 2024, making it more performant than base models.
 
 
 ## Model Details
@@ -87,13 +86,17 @@ Use the code below to get started with the model.
 
 ### Training Data
 
-
+The model was initially trained on a bit less than 3k entries; the dataset was later expanded to 5k high-quality question-answer pairs to get the most out of supervised fine-tuning.
+
+The dataset was generated by crawling the https://quantum-journal.org/ site and passing the crawled content to the OpenAI gpt-4-turbo model with various prompts to ensure high-quality data generation.
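+
+The sketch below illustrates the kind of generation pipeline described above, assuming a simple requests/BeautifulSoup crawler and the OpenAI chat completions API; the page selectors and the prompt are illustrative stand-ins, not the exact ones used.
+
+```python
+# Hypothetical sketch of the QA-pair generation flow; the selectors and
+# prompt are assumptions, not the exact pipeline behind this dataset.
+import requests
+from bs4 import BeautifulSoup
+from openai import OpenAI
+
+client = OpenAI()  # reads OPENAI_API_KEY from the environment
+
+def fetch_paragraphs(url: str) -> list[str]:
+    """Pull visible paragraph text from one quantum-journal.org page."""
+    html = requests.get(url, timeout=30).text
+    soup = BeautifulSoup(html, "html.parser")
+    return [p.get_text(strip=True) for p in soup.find_all("p")]
+
+def to_qa_pair(passage: str) -> str:
+    """Have gpt-4-turbo turn one passage into a question-answer pair."""
+    response = client.chat.completions.create(
+        model="gpt-4-turbo",
+        messages=[
+            {"role": "system", "content": "Write one factual question and its answer, grounded strictly in the provided text."},
+            {"role": "user", "content": passage},
+        ],
+    )
+    return response.choices[0].message.content
+```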
 
 [More Information Needed]
 
 ### Training Procedure
 
-
+Many training procedures were tried, along with multiple models.
+
+After an extensive grid search, supervised fine-tuning of Llama 3.1-8B with LoRA+ resulted in the best training and evaluation cross-entropy.
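+
+As a rough illustration of what LoRA+ changes relative to plain LoRA, the sketch below builds the adapter with peft and trains the lora_B matrices with a higher learning rate than lora_A via AdamW parameter groups; the 16x ratio is an assumed value, not one reported in this card.
+
+```python
+# Minimal LoRA+ sketch: same adapters as LoRA, but the lora_B weights get
+# a higher learning rate than lora_A (the core LoRA+ idea).
+import torch
+from peft import LoraConfig, get_peft_model
+from transformers import AutoModelForCausalLM
+
+model = AutoModelForCausalLM.from_pretrained(
+    "meta-llama/Meta-Llama-3.1-8B-Instruct", torch_dtype=torch.bfloat16
+)
+model = get_peft_model(
+    model, LoraConfig(task_type="CAUSAL_LM", r=8, lora_alpha=16, lora_dropout=0.1)
+)
+
+base_lr = 1e-4
+ratio = 16  # assumed LoRA+ multiplier for the lora_B learning rate
+optimizer = torch.optim.AdamW([
+    {"params": [p for n, p in model.named_parameters() if "lora_A" in n], "lr": base_lr},
+    {"params": [p for n, p in model.named_parameters() if "lora_B" in n], "lr": base_lr * ratio},
+])
+```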
 
 #### Preprocessing [optional]
 
@@ -102,7 +105,18 @@ Use the code below to get started with the model.
 
 #### Training Hyperparameters
 
-- **Training regime:** [More Information Needed]
+- **Training regime:** bfloat16 precision
+- LoRA rank: 8
+- LoRA alpha: 16
+- LoRA dropout: 0.1
+- Unfrozen modules: attention, MLP, and embeddings
+- Optimizer: AdamW
+- LR: 1e-4
+- LR scheduler: cosine
+- NEFT enabled: true
+- Batch size: 8
+- Number of epochs: 3
+
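+For reference, here is a sketch of how the settings above would map onto peft and transformers configuration; the target module names are the standard Llama ones assumed to correspond to "attention, MLP, and embeddings", and the NEFTune alpha is an assumed value since only "NEFT enabled" is recorded.
+
+```python
+from peft import LoraConfig
+from transformers import TrainingArguments
+
+lora_config = LoraConfig(
+    task_type="CAUSAL_LM",
+    r=8,               # LoRA rank
+    lora_alpha=16,     # LoRA alpha
+    lora_dropout=0.1,  # LoRA dropout
+    target_modules=[
+        "q_proj", "k_proj", "v_proj", "o_proj",  # attention
+        "gate_proj", "up_proj", "down_proj",     # MLP
+        "embed_tokens",                          # embeddings
+    ],
+)
+
+training_args = TrainingArguments(
+    output_dir="quantum-research-bot",  # placeholder path
+    bf16=True,                          # bfloat16 precision
+    optim="adamw_torch",                # AdamW
+    learning_rate=1e-4,                 # LR
+    lr_scheduler_type="cosine",         # LR scheduler
+    neftune_noise_alpha=5,              # NEFT on; alpha value is assumed
+    per_device_train_batch_size=8,      # batch size
+    num_train_epochs=3,                 # number of epochs
+)
+```
+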
 
 #### Speeds, Sizes, Times [optional]
 