krinal committed
Commit cbf3687
1 Parent(s): 505254e

update model card README.md

Files changed (1):
1. README.md +63 -24
README.md CHANGED
@@ -1,36 +1,75 @@
-
  ---
- license: apache-2.0
- library_name: span-marker
  tags:
- - span-marker
- - token-classification
- - ner
- - named-entity-recognition
- pipeline_tag: token-classification
  ---

- # SpanMarker for Named Entity Recognition

- This is a [SpanMarker](https://github.com/tomaarsen/SpanMarkerNER) model that can be used for Named Entity Recognition. In particular, this SpanMarker model uses [roberta-base](https://huggingface.co/roberta-base) as the underlying encoder.

- ## Usage

- To use this model for inference, first install the `span_marker` library:

- ```bash
- pip install span_marker
- ```

- You can then run inference with this model like so:

- ```python
- from span_marker import SpanMarkerModel

- # Download from the 🤗 Hub
- model = SpanMarkerModel.from_pretrained("span_marker_model_name")
- # Run inference
- entities = model.predict("Amelia Earhart flew her single engine Lockheed Vega 5B across the Atlantic to Paris.")
- ```

- See the [SpanMarker](https://github.com/tomaarsen/SpanMarkerNER) repository for documentation and additional information on this library.
  ---
  tags:
+ - generated_from_trainer
+ datasets:
+ - few-nerd
+ model-index:
+ - name: span-marker-robert-base
+   results: []
  ---

+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
+ should probably proofread and complete it, then remove this comment. -->
+
+ # span-marker-robert-base
+
+ This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the few-nerd dataset.
+ It achieves the following results on the evaluation set:
+ - Loss: 0.0214
+ - Overall Precision: 0.7642
+ - Overall Recall: 0.7947
+ - Overall F1: 0.7791
+ - Overall Accuracy: 0.9397
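As a quick consistency check, Overall F1 is the harmonic mean of Overall Precision and Overall Recall; a minimal sketch (the last digit differs slightly because the inputs above are already rounded):

```python
# Overall F1 is the harmonic mean of overall precision and recall.
precision, recall = 0.7642, 0.7947
f1 = 2 * precision * recall / (precision + recall)
print(f"{f1:.4f}")  # 0.7792 from the rounded inputs, consistent with the reported 0.7791
```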
+
+ ## Model description
+
+ More information needed
+
+ ## Intended uses & limitations
+
+ More information needed
+
+ ## Training and evaluation data
+
+ More information needed
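This section is still a stub, but the YAML front matter pins the training data to few-nerd. A minimal sketch of inspecting that dataset, assuming the DFKI-SLT/few-nerd Hub mirror and its "supervised" configuration (both assumptions; the card only records the name few-nerd):

```python
from datasets import load_dataset

# Assumptions: the DFKI-SLT/few-nerd Hub mirror and its "supervised" config;
# this card only records the dataset name "few-nerd".
fewnerd = load_dataset("DFKI-SLT/few-nerd", "supervised")

example = fewnerd["train"][0]
print(example["tokens"][:8])    # token list for the first training sentence
print(example["ner_tags"][:8])  # coarse NER tag ids aligned with the tokens
```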

+ ## Training procedure

+ ### Training hyperparameters

+ The following hyperparameters were used during training:
+ - learning_rate: 5e-05
+ - train_batch_size: 4
+ - eval_batch_size: 4
+ - seed: 42
+ - gradient_accumulation_steps: 2
+ - total_train_batch_size: 8
+ - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
+ - lr_scheduler_type: linear
+ - lr_scheduler_warmup_ratio: 0.1
+ - num_epochs: 1
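These are the standard fields of `transformers.TrainingArguments`; a minimal sketch of the equivalent configuration (the output directory and the surrounding trainer setup are assumptions, since the card does not record them):

```python
from transformers import TrainingArguments

# Sketch reconstructing the run configuration from the list above.
# Only the listed values come from the card; everything else is left at its default.
args = TrainingArguments(
    output_dir="span-marker-robert-base",  # assumed output path
    learning_rate=5e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    seed=42,
    gradient_accumulation_steps=2,  # 4 per device x 2 steps = total train batch size 8
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=1,
)
```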

+ ### Training results

+ | Training Loss | Epoch | Step | Validation Loss | Overall Precision | Overall Recall | Overall F1 | Overall Accuracy |
+ |:-------------:|:-----:|:----:|:---------------:|:-----------------:|:--------------:|:----------:|:----------------:|
+ | 0.0214        | 0.08  | 100  | 0.0219          | 0.7641            | 0.7679         | 0.7660     | 0.9330           |
+ | 0.0199        | 0.16  | 200  | 0.0243          | 0.7442            | 0.7679         | 0.7559     | 0.9348           |
+ | 0.0179        | 0.24  | 300  | 0.0212          | 0.7730            | 0.7580         | 0.7654     | 0.9361           |
+ | 0.0188        | 0.33  | 400  | 0.0225          | 0.7616            | 0.7710         | 0.7662     | 0.9343           |
+ | 0.0149        | 0.41  | 500  | 0.0240          | 0.7537            | 0.7783         | 0.7658     | 0.9375           |
+ | 0.015         | 0.49  | 600  | 0.0230          | 0.7540            | 0.7829         | 0.7682     | 0.9362           |
+ | 0.0137        | 0.57  | 700  | 0.0232          | 0.7746            | 0.7538         | 0.7640     | 0.9319           |
+ | 0.0123        | 0.65  | 800  | 0.0218          | 0.7651            | 0.7879         | 0.7763     | 0.9393           |
+ | 0.0103        | 0.73  | 900  | 0.0223          | 0.7688            | 0.7964         | 0.7824     | 0.9397           |
+ | 0.0108        | 0.82  | 1000 | 0.0209          | 0.7763            | 0.7816         | 0.7789     | 0.9397           |
+ | 0.0116        | 0.9   | 1100 | 0.0213          | 0.7743            | 0.7879         | 0.7811     | 0.9398           |
+ | 0.0119        | 0.98  | 1200 | 0.0214          | 0.7653            | 0.7947         | 0.7797     | 0.9400           |

+ ### Framework versions

+ - Transformers 4.30.2
+ - Pytorch 2.0.1+cu118
+ - Datasets 2.13.1
+ - Tokenizers 0.13.3
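This revision drops the previous card's usage snippet. For completeness, inference would follow the removed example, with the placeholder replaced by this model's repo id; `krinal/span-marker-robert-base` below is an assumed identifier:

```python
from span_marker import SpanMarkerModel

# Assumed Hub repo id for this model; substitute the actual path if it differs.
model = SpanMarkerModel.from_pretrained("krinal/span-marker-robert-base")

# Run inference; predict() returns one dict per detected entity.
entities = model.predict(
    "Amelia Earhart flew her single engine Lockheed Vega 5B across the Atlantic to Paris."
)
for entity in entities:
    print(entity["span"], "->", entity["label"])
```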