shawhin committed
Commit bd902e3
1 Parent(s): f4630c3

shawhin/distilbert-base-uncased-lora-text-classification

Files changed (2):
  1. README.md +14 -16
  2. training_args.bin +2 -2
README.md CHANGED
@@ -8,7 +8,6 @@ metrics:
 model-index:
 - name: distilbert-base-uncased-lora-text-classification
   results: []
-library_name: peft
 ---
 
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
@@ -18,8 +17,8 @@ should probably proofread and complete it, then remove this comment. -->
 
 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
 It achieves the following results on the evaluation set:
-- Loss: 1.0680
-- Accuracy: {'accuracy': 0.885}
+- Loss: 1.0684
+- Accuracy: {'accuracy': 0.879}
 
 ## Model description
 
@@ -50,22 +49,21 @@ The following hyperparameters were used during training:
 
 | Training Loss | Epoch | Step | Validation Loss | Accuracy            |
 |:-------------:|:-----:|:----:|:---------------:|:-------------------:|
-| No log        | 1.0   | 250  | 0.3690          | {'accuracy': 0.87}  |
-| 0.3946        | 2.0   | 500  | 0.5006          | {'accuracy': 0.849} |
-| 0.3946        | 3.0   | 750  | 0.5917          | {'accuracy': 0.865} |
-| 0.1803        | 4.0   | 1000 | 0.7957          | {'accuracy': 0.868} |
-| 0.1803        | 5.0   | 1250 | 0.7798          | {'accuracy': 0.872} |
-| 0.0779        | 6.0   | 1500 | 0.9091          | {'accuracy': 0.877} |
-| 0.0779        | 7.0   | 1750 | 1.0116          | {'accuracy': 0.877} |
-| 0.0156        | 8.0   | 2000 | 1.1076          | {'accuracy': 0.872} |
-| 0.0156        | 9.0   | 2250 | 1.0598          | {'accuracy': 0.885} |
-| 0.0085        | 10.0  | 2500 | 1.0680          | {'accuracy': 0.885} |
+| No log        | 1.0   | 250  | 0.4266          | {'accuracy': 0.87}  |
+| 0.4232        | 2.0   | 500  | 0.4260          | {'accuracy': 0.88}  |
+| 0.4232        | 3.0   | 750  | 0.5071          | {'accuracy': 0.885} |
+| 0.2213        | 4.0   | 1000 | 0.7424          | {'accuracy': 0.875} |
+| 0.2213        | 5.0   | 1250 | 0.7885          | {'accuracy': 0.881} |
+| 0.067         | 6.0   | 1500 | 0.9312          | {'accuracy': 0.872} |
+| 0.067         | 7.0   | 1750 | 0.9669          | {'accuracy': 0.874} |
+| 0.0238        | 8.0   | 2000 | 1.0856          | {'accuracy': 0.874} |
+| 0.0238        | 9.0   | 2250 | 1.0637          | {'accuracy': 0.88}  |
+| 0.0066        | 10.0  | 2500 | 1.0684          | {'accuracy': 0.879} |
 
 
 ### Framework versions
 
-- PEFT 0.5.0
 - Transformers 4.32.1
-- Pytorch 2.1.0.dev20230905
+- Pytorch 2.0.1
 - Datasets 2.14.4
-- Tokenizers 0.13.3
+- Tokenizers 0.13.2
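A note on the updated evaluation table: the final checkpoint (epoch 10, accuracy 0.879) is not the one with the best validation accuracy, and validation loss rises steadily after epoch 2, a typical overfitting pattern. A minimal sketch in plain Python (values copied from the new side of the table above) that picks the best checkpoint by validation accuracy:

```python
# Rows from the updated evaluation table: (epoch, step, val_loss, accuracy).
eval_rows = [
    (1.0, 250, 0.4266, 0.87),
    (2.0, 500, 0.4260, 0.88),
    (3.0, 750, 0.5071, 0.885),
    (4.0, 1000, 0.7424, 0.875),
    (5.0, 1250, 0.7885, 0.881),
    (6.0, 1500, 0.9312, 0.872),
    (7.0, 1750, 0.9669, 0.874),
    (8.0, 2000, 1.0856, 0.874),
    (9.0, 2250, 1.0637, 0.88),
    (10.0, 2500, 1.0684, 0.879),
]

# Best checkpoint by validation accuracy, not by final epoch.
best_epoch, best_step, best_loss, best_acc = max(eval_rows, key=lambda r: r[3])
print(best_epoch, best_step, best_acc)  # -> 3.0 750 0.885
```

With `load_best_model_at_end=True` and `metric_for_best_model="accuracy"` in the Trainer arguments (not confirmed by this diff), the epoch-3 checkpoint would have been kept instead of the final one.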
training_args.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:25a37fe8a90cd601e561e11156e8dc205cb7dc3c27fd68c66a9b7d3008be46f0
-size 4600
+oid sha256:a59b2d73bb0c596355af33595ac92a84ab94f22a9402c14fb809d5045d017e1a
+size 4091
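The training_args.bin entries in this diff are Git LFS pointer files rather than the binary itself: three `key value` lines giving the spec version, a sha256 object id, and the size of the real file in bytes. The pointer changing here means the commit replaced the actual training_args.bin stored in LFS. A minimal sketch that parses the new pointer shown above:

```python
# Git LFS pointer file, copied from the new side of the diff above.
pointer_text = """version https://git-lfs.github.com/spec/v1
oid sha256:a59b2d73bb0c596355af33595ac92a84ab94f22a9402c14fb809d5045d017e1a
size 4091
"""

# Each line is "key value"; split on the first space only.
fields = dict(line.split(" ", 1) for line in pointer_text.strip().splitlines())

algo, digest = fields["oid"].split(":", 1)  # "sha256" and the 64-char hex digest
size_bytes = int(fields["size"])            # size of the real file in bytes
```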