Locutusque committed
Commit: 85dc765
Parent: 393d28d

Update README.md

Files changed (1):
  1. README.md +2 -4
README.md CHANGED
@@ -6,8 +6,6 @@ language:
  - en
  pipeline_tag: text-generation
  ---
- A pre-trained language model based on the Mistral 7B model, shrunk to approximately 248 million parameters, required minimal training. Convergence was achieved with only 250,000 examples over 125,000 steps. This model is not intended for direct use but rather for fine-tuning on a downstream task.
+ A pre-trained language model, based on the Mistral 7B model, has been scaled down to approximately 248 million parameters. Currently, it's been trained on 250,000 examples over 125,000 steps within the first epoch. The batch size is ramped up from 2 to 16 for future epochs. This model isn't intended for direct use but for fine-tuning on a downstream task.
 
- During evaluation on InstructMix, this model achieved an average perplexity score of 6.3
-
- More training sessions will come for this model.
+ During evaluation on InstructMix, this model achieved an average perplexity score of 6.3. More training sessions are planned for this model.
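
The updated README positions the checkpoint as a base for downstream fine-tuning and reports an average perplexity of 6.3 on InstructMix. Below is a minimal sketch of how such a checkpoint could be loaded with the transformers library and spot-checked for perplexity; the repository id is a placeholder assumption and is not taken from this commit.

```python
# Sketch only: load the shrunken checkpoint and compute perplexity on one text.
# MODEL_ID is a placeholder assumption; substitute the actual repo published by Locutusque.
import math

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "Locutusque/<this-model>"  # placeholder, not confirmed by the commit

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID)
model.eval()

# Perplexity is exp of the mean token-level cross-entropy loss, which the model
# returns when labels are set equal to the input ids.
text = "Example held-out text for a quick perplexity check."
inputs = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    loss = model(**inputs, labels=inputs["input_ids"]).loss

print(f"perplexity: {math.exp(loss.item()):.2f}")
```

Fine-tuning on a downstream task would start from the same `AutoModelForCausalLM.from_pretrained` call and proceed with a standard training loop or the transformers Trainer; averaging the per-example loss over a held-out set before exponentiating gives an aggregate figure comparable to the 6.3 reported above.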