aashish1904 committed
Commit b7e7a39
1 Parent(s): b872465

Upload README.md with huggingface_hub

Files changed (1)
1. README.md +79 -0
README.md ADDED
@@ -0,0 +1,79 @@
+ ---
+ license: apache-2.0
+ base_model: distilgpt2
+ tags:
+ - generated_from_trainer
+ model-index:
+ - name: distilgpt2-finetuned-python_code_instructions_18k_alpaca
+   results: []
+ datasets:
+ - iamtarun/python_code_instructions_18k_alpaca
+ language:
+ - en
+ metrics:
+ - accuracy
+ library_name: transformers
+ pipeline_tag: text-generation
+ ---
+
+ [![QuantFactory Banner](https://lh7-rt.googleusercontent.com/docsz/AD_4nXeiuCm7c8lEwEJuRey9kiVZsRn2W-b4pWlu3-X534V3YmVuVc2ZL-NXg2RkzSOOS2JXGHutDuyyNAUtdJI65jGTo8jT9Y99tMi4H4MqL44Uc5QKG77B0d6-JfIkZHFaUA71-RtjyYZWVIhqsNZcx8-OMaA?key=xt3VSDoCbmTY7o-cwwOFwQ)](https://hf.co/QuantFactory)
+
+ # QuantFactory/distilgpt2-finetuned-python_code_instructions_18k_alpaca-GGUF
+
+ This is a quantized version of [Vishaltiwari2019/distilgpt2-finetuned-python_code_instructions_18k_alpaca](https://huggingface.co/Vishaltiwari2019/distilgpt2-finetuned-python_code_instructions_18k_alpaca), created using llama.cpp.
+
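+ A minimal sketch of running the GGUF locally with llama-cpp-python; the quant filename below is an assumption, so substitute the actual file listed in this repo.
+
+ ```python
+ from llama_cpp import Llama
+
+ # Download and load a quant directly from this repo (the filename glob is
+ # assumed; check the repo's file list for the real GGUF names).
+ llm = Llama.from_pretrained(
+     repo_id="QuantFactory/distilgpt2-finetuned-python_code_instructions_18k_alpaca-GGUF",
+     filename="*Q4_K_M.gguf",
+ )
+
+ out = llm("Write a Python function that reverses a string.", max_tokens=128)
+ print(out["choices"][0]["text"])
+ ```
+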
+ # Original Model Card
+
+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
+ should probably proofread and complete it, then remove this comment. -->
+
+ # distilgpt2-finetuned-python_code_instructions_18k_alpaca
+
+ This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the [iamtarun/python_code_instructions_18k_alpaca](https://huggingface.co/datasets/iamtarun/python_code_instructions_18k_alpaca) dataset.
+ It achieves the following results on the evaluation set:
+ - Loss: 1.5063
+
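+ As a quick usage sketch, the original checkpoint can be loaded with the transformers text-generation pipeline; the Alpaca-style prompt below is an assumption inferred from the dataset name, not a documented template.
+
+ ```python
+ from transformers import pipeline
+
+ pipe = pipeline(
+     "text-generation",
+     model="Vishaltiwari2019/distilgpt2-finetuned-python_code_instructions_18k_alpaca",
+ )
+
+ # Assumed Alpaca-style prompt; adjust to the format actually used in training.
+ prompt = (
+     "Below is an instruction that describes a task.\n\n"
+     "### Instruction:\nWrite a Python function that checks if a number is prime.\n\n"
+     "### Output:\n"
+ )
+ print(pipe(prompt, max_new_tokens=128)[0]["generated_text"])
+ ```
+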
+ ## Model description
+
+ More information needed
+
+ ## Intended uses & limitations
+
+ More information needed
+
+ ## Training and evaluation data
+
+ More information needed
+
+ ## Training procedure
+
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training (see the `TrainingArguments` sketch after this list):
+ - learning_rate: 2e-05
+ - train_batch_size: 8
+ - eval_batch_size: 8
+ - seed: 42
+ - optimizer: Adam with betas=(0.9, 0.999) and epsilon=1e-08
+ - lr_scheduler_type: linear
+ - num_epochs: 3
+
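+ The list above maps onto `transformers.TrainingArguments` roughly as follows; this is a sketch, and `output_dir` (plus anything not listed in the card) is an assumption.
+
+ ```python
+ from transformers import TrainingArguments
+
+ training_args = TrainingArguments(
+     output_dir="distilgpt2-finetuned-python_code_instructions_18k_alpaca",  # assumed
+     learning_rate=2e-5,
+     per_device_train_batch_size=8,
+     per_device_eval_batch_size=8,
+     seed=42,
+     # Adam betas/epsilon stated in the card (also the transformers defaults).
+     adam_beta1=0.9,
+     adam_beta2=0.999,
+     adam_epsilon=1e-8,
+     lr_scheduler_type="linear",
+     num_train_epochs=3,
+ )
+ ```
+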
+ ### Training results
+
+ | Training Loss | Epoch | Step  | Validation Loss |
+ |:-------------:|:-----:|:-----:|:---------------:|
+ | 1.7264        | 1.0   | 3861  | 1.5890          |
+ | 1.6046        | 2.0   | 7722  | 1.5214          |
+ | 1.5359        | 3.0   | 11583 | 1.5063          |
+
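+ For intuition (not part of the original card): a causal-LM validation loss is a mean cross-entropy in nats, so it converts to perplexity via exp(loss).
+
+ ```python
+ import math
+
+ # Perplexity = exp(cross-entropy loss in nats).
+ for loss in (1.5890, 1.5214, 1.5063):
+     print(f"validation loss {loss:.4f} -> perplexity {math.exp(loss):.2f}")
+ # validation loss 1.5890 -> perplexity 4.90
+ # validation loss 1.5214 -> perplexity 4.58
+ # validation loss 1.5063 -> perplexity 4.51
+ ```
+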
+ ### Framework versions
+
+ - Transformers 4.39.3
+ - PyTorch 2.2.1+cu121
+ - Datasets 2.18.0
+ - Tokenizers 0.15.2