Prasanna Dhungana committed
Commit ee47987
1 Parent(s): fbd2075

update README.md

Files changed (1)
  1. README.md +6 -2
README.md CHANGED
@@ -9,6 +9,10 @@ base_model: bigcode/starcoder2-3b
 model-index:
 - name: finetune_starcoder2_with_R_data
   results: []
+datasets:
+- bigcode/the-stack
+language:
+- en
 ---
 
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
@@ -16,11 +20,11 @@ should probably proofread and complete it, then remove this comment. -->
 
 # finetune_starcoder2_with_R_data
 
-This model is a variant of the bigcode/starcoder2-3b architecture, adapted and fine-tuned specifically for generating R programming code.
+This model is a fine-tuned version of [bigcode/starcoder2-3b](https://huggingface.co/bigcode/starcoder2-3b), adapted specifically for generating R programming code.
 
 ## Model description
 
-Model is
+This model is a specialized version of the bigcode/starcoder2-3b architecture, fine-tuned on the R-language subset of the Stack dataset. The fine-tuning used PEFT (Parameter-Efficient Fine-Tuning) with LoRA adapters, loading the base model in 4-bit quantization. It is tailored for generating R programming code, offering optimized performance for tasks within this domain.
 
 ## Intended uses & limitations
 
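For reference, the setup described in the new model description (a 4-bit quantized base model wrapped with LoRA adapters via PEFT) is commonly assembled roughly as follows. This is a minimal sketch using the transformers, bitsandbytes, and peft libraries, not the author's actual training code; the LoRA rank, alpha, dropout, and target-module names are illustrative assumptions not stated in the commit.

```python
# Minimal QLoRA-style sketch: load bigcode/starcoder2-3b in 4-bit and attach
# LoRA adapters with the peft library. Hyperparameters below are assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

# 4-bit quantization config (requires bitsandbytes)
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = AutoModelForCausalLM.from_pretrained(
    "bigcode/starcoder2-3b",
    quantization_config=bnb_config,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("bigcode/starcoder2-3b")

# LoRA adapter config; rank/alpha/dropout and target modules are illustrative
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # assumed attention projections
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the LoRA adapter weights are trainable
```

With this kind of setup, only the small adapter matrices are updated during training while the quantized base weights stay frozen, which is what keeps fine-tuning a 3B-parameter model feasible on modest hardware.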