---
license: apache-2.0
base_model: facebook/bart-base
tags:
- generated_from_trainer
datasets:
- clupubhealth
metrics:
- rouge
model-index:
- name: pubhealth-expanded-1
  results:
  - task:
      name: Sequence-to-sequence Language Modeling
      type: text2text-generation
    dataset:
      name: clupubhealth
      type: clupubhealth
      config: expanded
      split: test
      args: expanded
    metrics:
    - name: Rouge1
      type: rouge
      value: 28.6755
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# pubhealth-expanded-1

This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on the clupubhealth dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3198
- Rouge1: 28.6755
- Rouge2: 9.2869
- Rougel: 21.9675
- Rougelsum: 22.2946
- Gen Len: 19.85
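
Since this is a BART-based sequence-to-sequence checkpoint evaluated with ROUGE, it can be loaded with the standard `transformers` summarization pipeline. Below is a minimal usage sketch; the repository id is a placeholder, so substitute the actual Hub path or a local directory where this checkpoint is hosted.

```python
# Minimal usage sketch. The model id below is a placeholder; replace it with the
# actual Hub repository id or a local path containing this checkpoint.
from transformers import pipeline

summarizer = pipeline(
    "summarization",
    model="<namespace>/pubhealth-expanded-1",  # hypothetical repository id
)

article = "Text of a public-health article or claim to be summarized ..."
result = summarizer(article, max_length=64, min_length=8, do_sample=False)
print(result[0]["summary_text"])
```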

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 12
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 10
- total_train_batch_size: 120
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
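
These settings correspond roughly to the following `Seq2SeqTrainingArguments` sketch. The output directory and the evaluation cadence are illustrative assumptions (the results table below reports evaluation every 40 steps); the remaining values mirror the list above.

```python
# Sketch of how the listed hyperparameters map onto transformers' Seq2SeqTrainingArguments.
# output_dir, evaluation_strategy, and eval_steps are assumptions, not taken from the card.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="pubhealth-expanded-1",   # assumed output directory
    learning_rate=2e-5,
    per_device_train_batch_size=12,
    per_device_eval_batch_size=4,
    gradient_accumulation_steps=10,      # effective train batch size: 12 * 10 = 120
    num_train_epochs=1,
    lr_scheduler_type="linear",
    seed=42,
    evaluation_strategy="steps",         # results table shows evaluation every 40 steps
    eval_steps=40,
    predict_with_generate=True,          # required to compute ROUGE during evaluation
)
```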

### Training results

| Training Loss | Epoch | Step | Validation Loss | Rouge1  | Rouge2 | Rougel  | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:-------:|:---------:|:-------:|
| 3.6788        | 0.08  | 40   | 2.3758          | 29.5273 | 9.3588 | 22.4799 | 22.6212   | 19.835  |
| 3.4222        | 0.15  | 80   | 2.3484          | 29.0821 | 9.1988 | 22.3907 | 22.5996   | 19.88   |
| 3.3605        | 0.23  | 120  | 2.3500          | 29.2893 | 9.296  | 22.1247 | 22.4075   | 19.94   |
| 3.3138        | 0.31  | 160  | 2.3504          | 29.039  | 8.907  | 21.9631 | 22.2506   | 19.91   |
| 3.2678        | 0.39  | 200  | 2.3461          | 29.678  | 9.4429 | 22.3439 | 22.6962   | 19.92   |
| 3.2371        | 0.46  | 240  | 2.3267          | 28.535  | 9.1858 | 21.3721 | 21.6634   | 19.915  |
| 3.204         | 0.54  | 280  | 2.3330          | 29.0796 | 9.4283 | 21.8953 | 22.1867   | 19.885  |
| 3.1881        | 0.62  | 320  | 2.3164          | 29.1456 | 9.1919 | 21.9529 | 22.235    | 19.945  |
| 3.1711        | 0.69  | 360  | 2.3208          | 29.3212 | 9.4823 | 22.1643 | 22.4159   | 19.895  |
| 3.1752        | 0.77  | 400  | 2.3239          | 29.0408 | 9.3615 | 21.8007 | 22.0795   | 19.945  |
| 3.1591        | 0.85  | 440  | 2.3218          | 28.6336 | 9.2799 | 21.5843 | 21.9422   | 19.845  |
| 3.1663        | 0.93  | 480  | 2.3198          | 28.6755 | 9.2869 | 21.9675 | 22.2946   | 19.85   |


### Framework versions

- Transformers 4.31.0
- Pytorch 2.0.1+cu117
- Datasets 2.7.1
- Tokenizers 0.13.2