mingxilei committed · Commit 25a166b · verified · 1 parent: 6e1601b

Training in progress, step 500

Files changed (4)
  1. README.md +13 -13
  2. model.safetensors +1 -1
  3. tokenizer.json +1 -6
  4. training_args.bin +1 -1
README.md CHANGED
@@ -1,6 +1,7 @@
 ---
 library_name: transformers
-base_model: siebert/sentiment-roberta-large-english
+license: mit
+base_model: FacebookAI/roberta-large
 tags:
 - generated_from_trainer
 metrics:
@@ -15,10 +16,10 @@ should probably proofread and complete it, then remove this comment. -->
 
 # roberta-imdb
 
-This model is a fine-tuned version of [siebert/sentiment-roberta-large-english](https://huggingface.co/siebert/sentiment-roberta-large-english) on an unknown dataset.
+This model is a fine-tuned version of [FacebookAI/roberta-large](https://huggingface.co/FacebookAI/roberta-large) on an unknown dataset.
 It achieves the following results on the evaluation set:
-- Loss: 0.1137
-- Accuracy: 0.9612
+- Loss: 0.6932
+- Accuracy: 0.5
 
 ## Model description
 
@@ -37,25 +38,24 @@ More information needed
 ### Training hyperparameters
 
 The following hyperparameters were used during training:
-- learning_rate: 2e-05
-- train_batch_size: 64
-- eval_batch_size: 64
+- learning_rate: 5e-05
+- train_batch_size: 32
+- eval_batch_size: 32
 - seed: 42
 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
-- lr_scheduler_type: cosine
-- lr_scheduler_warmup_steps: 30
+- lr_scheduler_type: linear
 - num_epochs: 1
 
 ### Training results
 
 | Training Loss | Epoch | Step | Validation Loss | Accuracy |
 |:-------------:|:-----:|:----:|:---------------:|:--------:|
-| 0.0924        | 1.0   | 391  | 0.1137          | 0.9612   |
+| 0.5483        | 1.0   | 782  | 0.6932          | 0.5      |
 
 
 ### Framework versions
 
-- Transformers 4.46.3
+- Transformers 4.47.1
 - Pytorch 2.5.1+cu124
-- Datasets 3.1.0
-- Tokenizers 0.20.3
+- Datasets 3.2.0
+- Tokenizers 0.21.0
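A quick consistency check on the step counts in the diff: one optimizer step is taken per batch, so steps per epoch is the ceiling of dataset size over batch size. Both the old run (391 steps at batch 64) and the new run (782 steps at batch 32) line up with a 25,000-example training set, the size of the IMDB train split. That dataset size is an assumption on our part, since the card itself says "unknown dataset".

```python
import math

def steps_per_epoch(num_examples: int, batch_size: int) -> int:
    # One optimizer step per batch; a final partial batch still counts as a step.
    return math.ceil(num_examples / batch_size)

# Assumption: 25,000 training examples (the size of the IMDB train split).
N = 25_000
print(steps_per_epoch(N, 64))  # 391 -- matches the old run's step count
print(steps_per_epoch(N, 32))  # 782 -- matches the new run's step count
```

This also explains why halving the batch size exactly doubled (plus rounding) the reported step count between the two README versions.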
model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:aa8e58d3d81d701cba4e5279de3669e14d4373a717b6166b59afbb4f4d02f3a8
+oid sha256:a6f0cf9b6d772a4bc8f0b2aff22381c14f71c33b6c6fbde989d348616361e520
 size 1421491316
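What the diff above shows is not the model weights themselves but a Git LFS pointer file: a tiny text stub whose `oid` (a sha256 content address) points at the real blob in LFS storage. Only the `oid` changes here, while `size` stays at 1421491316 bytes, because the new checkpoint has the same tensor layout and just different weight values. A minimal sketch of reading such a pointer (the `parse_lfs_pointer` helper is ours, not part of any library):

```python
def parse_lfs_pointer(text: str) -> dict:
    # Each pointer line is "key value"; split on the first space only,
    # so values containing spaces or colons survive intact.
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

# The new-side pointer from the diff above.
pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:a6f0cf9b6d772a4bc8f0b2aff22381c14f71c33b6c6fbde989d348616361e520
size 1421491316"""

info = parse_lfs_pointer(pointer)
print(info["oid"])   # the content address of the actual safetensors blob
print(info["size"])  # 1421491316
```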
tokenizer.json CHANGED
@@ -1,11 +1,6 @@
 {
   "version": "1.0",
-  "truncation": {
-    "direction": "Right",
-    "max_length": 512,
-    "strategy": "LongestFirst",
-    "stride": 0
-  },
+  "truncation": null,
   "padding": null,
   "added_tokens": [
     {
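This change replaces an explicit truncation block (right-side truncation at 512 tokens) with `null`, so the saved tokenizer no longer truncates by default; callers who need a length cap must re-enable it at load time. A small sketch of the two config states, with values copied from the diff:

```python
import json

# Old tokenizer.json carried an explicit truncation block.
old = {
    "version": "1.0",
    "truncation": {"direction": "Right", "max_length": 512,
                   "strategy": "LongestFirst", "stride": 0},
    "padding": None,
}

# The new config sets it to null: no truncation unless re-enabled.
new = dict(old, truncation=None)
print(json.dumps(new["truncation"]))  # null
```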
training_args.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:6fc7050a5e1e31e7e8ca4d4d8bc723bb6ece781c09941114516c1c2f84ffd4a6
+oid sha256:8af21c62543879f62f52f68967346622e67692c6bee3cae7a19c369c2fb24b3a
 size 5304