nenad1002 committed
Commit fecb757
1 Parent(s): ad0787c

Update README.md

Files changed (1)
  1. README.md +2 -2
README.md CHANGED
@@ -129,7 +129,7 @@ Given that GPT-4-turbo was already used in this context for the reference questi
  | **ROUGE-2**| 0.4098 | 0.1751 | 0.3104 |
  | **ROUGE-L**| 0.5809 | 0.2902 | 0.4856 |
 
- _quantum-research-bot-v1.0_ outperformed on all metrics, although _Gemini_ came close in ROUGE-L precision with the difference of only 0.001.
+ _quantum-research-bot-v1.0_ outperformed on all metrics, although _Gemini_ came close in BERTScore precision with a difference of only 0.001.
 
  Most other metrics, such as TruthfulQA, MMLU, and similar benchmarks, are not applicable here because this model has been fine-tuned for a very specific domain of knowledge.
 
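For context, scores like the ROUGE and BERTScore figures in this hunk can be computed with the Hugging Face `evaluate` library. The sketch below is an assumption about how such numbers might be produced, not the evaluation script from this repository; the candidate and reference strings are placeholders.

```python
# Sketch: computing ROUGE and BERTScore precision for model answers against
# reference answers with the `evaluate` library (requires the `rouge_score`
# and `bert_score` packages). The strings below are placeholders.
import evaluate

predictions = ["Quantum error correction protects logical qubits from decoherence."]
references = ["Quantum error correction protects logical qubits against decoherence."]

rouge = evaluate.load("rouge")
bertscore = evaluate.load("bertscore")

rouge_scores = rouge.compute(predictions=predictions, references=references)
bert_scores = bertscore.compute(predictions=predictions, references=references, lang="en")

print(rouge_scores["rouge2"], rouge_scores["rougeL"])  # ROUGE-2 / ROUGE-L
print(bert_scores["precision"][0])                     # BERTScore precision
```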
@@ -176,7 +176,7 @@ For most workloads:
  1 x RTX A6000
  16 vCPU 62 GB RAM
 
- However, when fine tuning `meta-llama/Meta-Llama-3-70B-Instruct` quantization was applied, and I've used 4xA100. Since this did not yield much improvements, and it was very costly, I decided to stick to model with fewer parameters.
+ However, when fine-tuning `meta-llama/Meta-Llama-3-70B-Instruct`, I applied quantization and used 4xA100 GPUs. Since this did not yield much improvement and was very costly, I decided to stick with a model with fewer parameters.
 
 
  #### Hardware
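The commit only states that quantization was applied for the 70B run; the details are not given. Below is a minimal sketch of one common setup it could resemble, assuming 4-bit NF4 loading via bitsandbytes with LoRA adapters from `peft`; the precision, LoRA targets, and hyperparameters are assumptions, not taken from this diff.

```python
# Sketch (assumed setup): 4-bit quantized loading of the 70B model before
# attaching LoRA adapters, sharded across the available GPUs (e.g. 4xA100).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

model_id = "meta-llama/Meta-Llama-3-70B-Instruct"

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",  # shard the quantized weights across all visible GPUs
)

# Hypothetical LoRA configuration; ranks and target modules are illustrative only.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
```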
 