arhamk committed
Commit 47f44f4
Parent: ec24afe

Update README.md

Files changed (1): README.md (+18 -3)
README.md CHANGED
@@ -1,10 +1,25 @@
 ---
 library_name: peft
+license: mit
+tags:
+- llama2
+- quantization
+- nlp
+- transformers
+- language-model
+- bitsandbytes
+- fine-tuned
+- causal-lm
 ---
-## Training procedure
 
+## Overview
+
+This model is a fine-tuned model based on the "TinyPixel/Llama-2-7B-bf16-sharded" model and "timdettmers/openassistant-guanaco" dataset. It is optimized for causal language modeling tasks with specific quantization configurations. The model is trained using the PEFT framework and leverages the `bitsandbytes` quantization method.
+
+## Training Procedure
 
 The following `bitsandbytes` quantization config was used during training:
+
 - quant_method: bitsandbytes
 - load_in_8bit: False
 - load_in_4bit: True
@@ -15,7 +30,7 @@ The following `bitsandbytes` quantization config was used during training:
 - bnb_4bit_quant_type: nf4
 - bnb_4bit_use_double_quant: False
 - bnb_4bit_compute_dtype: float16
-### Framework versions
 
+### Framework Versions
 
-- PEFT 0.6.0.dev0
+The model was trained using PEFT version 0.6.0.dev0.
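
For readers of the updated card, here is a minimal sketch of how the quantization settings listed above map onto `transformers`' `BitsAndBytesConfig` when loading the base model. The model id is the one named in the overview; `device_map` and anything else not in the listed config are assumptions, not the card's stated training setup.

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Mirrors the config values listed in the card.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                     # load_in_4bit: True
    load_in_8bit=False,                    # load_in_8bit: False
    bnb_4bit_quant_type="nf4",             # bnb_4bit_quant_type: nf4
    bnb_4bit_use_double_quant=False,       # bnb_4bit_use_double_quant: False
    bnb_4bit_compute_dtype=torch.float16,  # bnb_4bit_compute_dtype: float16
)

base_model = AutoModelForCausalLM.from_pretrained(
    "TinyPixel/Llama-2-7B-bf16-sharded",   # base model named in the overview
    quantization_config=bnb_config,
    device_map="auto",                     # assumption, not from the card
)
```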
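
And a hedged usage sketch for attaching the adapter in this repository to that quantized base model with PEFT: the adapter repo id below is a placeholder, and the prompt format is only an assumption based on the Guanaco-style formatting of the `timdettmers/openassistant-guanaco` dataset.

```python
from peft import PeftModel
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("TinyPixel/Llama-2-7B-bf16-sharded")

# "<this-adapter-repo>" is a placeholder for this repository's id.
model = PeftModel.from_pretrained(base_model, "<this-adapter-repo>")

# Guanaco-style prompt (assumed from the openassistant-guanaco dataset).
prompt = "### Human: What does 4-bit NF4 quantization do?### Assistant:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```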