HaileyStorm committed
Commit 18df521 · verified · 1 Parent(s): 3fcafcb

Update README.md

Files changed (1)
  1. README.md +1 -1
README.md CHANGED
@@ -12,7 +12,7 @@ tags:
 
 This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). It is a prune of Meta-Llama-3-8B-Instruct down to 20 layers, or about 5.4B parameters.
 Mostly, this is a test of pruning & healing an instruct-tuned model.
-This size should allow Q8 or Q6 inference on 6GB VRAM, Q5 inference on 4GB VRAM, and full-weight fine-tuning ... well, with less VRAM than an 8B model.
+This size should allow bf16 inference on 24GB VRAM, Q8 or Q6 inference on 6GB VRAM, Q5 inference on 4GB VRAM, and fine-tuning ... well, with less VRAM than an 8B model.
 
 ## Merge Details
 ### Merge Method
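
For context, mergekit expresses this kind of layer prune as a passthrough merge over layer slices of a single source model. The sketch below is illustrative only: the slice boundaries are hypothetical and are not the ones used for this model (the actual split is not shown in this hunk); it simply shows the shape of a config that keeps 20 of the 32 layers.

```yaml
# Illustrative mergekit passthrough prune (hypothetical layer ranges,
# NOT the ones used for this model) keeping 12 + 8 = 20 layers.
slices:
  - sources:
      - model: meta-llama/Meta-Llama-3-8B-Instruct
        layer_range: [0, 12]
  - sources:
      - model: meta-llama/Meta-Llama-3-8B-Instruct
        layer_range: [24, 32]
merge_method: passthrough
dtype: bfloat16
```

As a rough sanity check on the VRAM claim in the diff: at about 5.4B parameters, bf16 weights alone are roughly 10.8 GB (well within 24 GB), while ~1-byte-per-weight Q8 comes to roughly 5.4 GB, which is why it lands near the 6 GB mark before KV-cache and runtime overhead.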