philschmid (HF staff) committed on
Commit
8c08893
1 Parent(s): 88a7de2

update readme

Files changed (1)
  1. README.md +142 -42
README.md CHANGED
@@ -23,23 +23,9 @@ It achieves the following results on the evaluation set:
  - Question: {'precision': 0.8211009174311926, 'recall': 0.8403755868544601, 'f1': 0.8306264501160092, 'number': 1065}
  - Overall Precision: 0.7599
  - Overall Recall: 0.8083
- - Overall F1: 0.7834
  - Overall Accuracy: 0.8106

- ## Model description
-
- More information needed
-
- ## Intended uses & limitations
-
- More information needed
-
- ## Training and evaluation data
-
- More information needed
-
- ## Training procedure
-
  ### Training hyperparameters

  The following hyperparameters were used during training:
@@ -52,30 +38,144 @@ The following hyperparameters were used during training:
  - num_epochs: 15
  - mixed_precision_training: Native AMP

- ### Training results
-
- | Training Loss | Epoch | Step | Validation Loss | Answer | Header | Question | Overall Precision | Overall Recall | Overall F1 | Overall Accuracy |
- |:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
- | 0.1724 | 1.0 | 10 | 0.7657 | {'precision': 0.7097826086956521, 'recall': 0.8071693448702101, 'f1': 0.7553499132446501, 'number': 809} | {'precision': 0.3893129770992366, 'recall': 0.42857142857142855, 'f1': 0.40800000000000003, 'number': 119} | {'precision': 0.7941176470588235, 'recall': 0.8366197183098592, 'f1': 0.8148148148148148, 'number': 1065} | 0.7340 | 0.8003 | 0.7657 | 0.8134 |
- | 0.1451 | 2.0 | 20 | 0.8099 | {'precision': 0.7136659436008677, 'recall': 0.8133498145859085, 'f1': 0.7602541883304449, 'number': 809} | {'precision': 0.4215686274509804, 'recall': 0.36134453781512604, 'f1': 0.3891402714932127, 'number': 119} | {'precision': 0.809437386569873, 'recall': 0.8375586854460094, 'f1': 0.823257960313798, 'number': 1065} | 0.7493 | 0.7993 | 0.7735 | 0.8125 |
- | 0.1179 | 3.0 | 30 | 0.8622 | {'precision': 0.7099892588614393, 'recall': 0.8170580964153276, 'f1': 0.7597701149425288, 'number': 809} | {'precision': 0.4074074074074074, 'recall': 0.46218487394957986, 'f1': 0.4330708661417323, 'number': 119} | {'precision': 0.8123300090661831, 'recall': 0.8413145539906103, 'f1': 0.8265682656826567, 'number': 1065} | 0.7432 | 0.8088 | 0.7746 | 0.8074 |
- | 0.0988 | 4.0 | 40 | 0.8587 | {'precision': 0.7141327623126338, 'recall': 0.8244746600741656, 'f1': 0.7653471026965003, 'number': 809} | {'precision': 0.4166666666666667, 'recall': 0.5042016806722689, 'f1': 0.4562737642585551, 'number': 119} | {'precision': 0.8370998116760828, 'recall': 0.8347417840375587, 'f1': 0.8359191349318289, 'number': 1065} | 0.7551 | 0.8108 | 0.7820 | 0.8157 |
- | 0.0848 | 5.0 | 50 | 0.8933 | {'precision': 0.7255813953488373, 'recall': 0.7713226205191595, 'f1': 0.7477531455961653, 'number': 809} | {'precision': 0.4024390243902439, 'recall': 0.5546218487394958, 'f1': 0.46643109540636046, 'number': 119} | {'precision': 0.8201834862385321, 'recall': 0.8394366197183099, 'f1': 0.8296983758700696, 'number': 1065} | 0.7493 | 0.7948 | 0.7714 | 0.8056 |
- | 0.073 | 6.0 | 60 | 0.9009 | {'precision': 0.7344444444444445, 'recall': 0.8170580964153276, 'f1': 0.7735517846693973, 'number': 809} | {'precision': 0.41721854304635764, 'recall': 0.5294117647058824, 'f1': 0.4666666666666667, 'number': 119} | {'precision': 0.8107370336669699, 'recall': 0.8366197183098592, 'f1': 0.8234750462107209, 'number': 1065} | 0.7512 | 0.8103 | 0.7796 | 0.8123 |
- | 0.0655 | 7.0 | 70 | 0.9117 | {'precision': 0.7367231638418079, 'recall': 0.8059332509270705, 'f1': 0.769775678866588, 'number': 809} | {'precision': 0.4357142857142857, 'recall': 0.5126050420168067, 'f1': 0.47104247104247104, 'number': 119} | {'precision': 0.8170955882352942, 'recall': 0.8347417840375587, 'f1': 0.8258244310264746, 'number': 1065} | 0.7582 | 0.8038 | 0.7803 | 0.8088 |
- | 0.0599 | 8.0 | 80 | 0.9414 | {'precision': 0.7298474945533769, 'recall': 0.8281829419035847, 'f1': 0.7759119861030689, 'number': 809} | {'precision': 0.41496598639455784, 'recall': 0.5126050420168067, 'f1': 0.4586466165413534, 'number': 119} | {'precision': 0.8100810081008101, 'recall': 0.8450704225352113, 'f1': 0.8272058823529411, 'number': 1065} | 0.7495 | 0.8184 | 0.7824 | 0.8089 |
- | 0.0551 | 9.0 | 90 | 0.9548 | {'precision': 0.746031746031746, 'recall': 0.8133498145859085, 'f1': 0.7782377291543465, 'number': 809} | {'precision': 0.42953020134228187, 'recall': 0.5378151260504201, 'f1': 0.47761194029850745, 'number': 119} | {'precision': 0.823963133640553, 'recall': 0.8394366197183099, 'f1': 0.8316279069767442, 'number': 1065} | 0.7637 | 0.8108 | 0.7866 | 0.8111 |
- | 0.0483 | 10.0 | 100 | 0.9684 | {'precision': 0.7390326209223848, 'recall': 0.8121137206427689, 'f1': 0.773851590106007, 'number': 809} | {'precision': 0.42, 'recall': 0.5294117647058824, 'f1': 0.46840148698884754, 'number': 119} | {'precision': 0.8232044198895028, 'recall': 0.8394366197183099, 'f1': 0.8312412831241283, 'number': 1065} | 0.7595 | 0.8098 | 0.7839 | 0.8091 |
- | 0.0424 | 11.0 | 110 | 0.9858 | {'precision': 0.7392290249433107, 'recall': 0.8059332509270705, 'f1': 0.7711413364872857, 'number': 809} | {'precision': 0.4258064516129032, 'recall': 0.5546218487394958, 'f1': 0.48175182481751827, 'number': 119} | {'precision': 0.8252788104089219, 'recall': 0.8338028169014085, 'f1': 0.8295189163942083, 'number': 1065} | 0.7601 | 0.8058 | 0.7823 | 0.8094 |
- | 0.0402 | 12.0 | 120 | 0.9920 | {'precision': 0.7315436241610739, 'recall': 0.8084054388133498, 'f1': 0.7680563711098063, 'number': 809} | {'precision': 0.4460431654676259, 'recall': 0.5210084033613446, 'f1': 0.48062015503875966, 'number': 119} | {'precision': 0.8205128205128205, 'recall': 0.8413145539906103, 'f1': 0.8307834955957348, 'number': 1065} | 0.7586 | 0.8088 | 0.7829 | 0.8111 |
- | 0.0392 | 13.0 | 130 | 1.0027 | {'precision': 0.7463193657984145, 'recall': 0.8145859085290482, 'f1': 0.7789598108747045, 'number': 809} | {'precision': 0.4397163120567376, 'recall': 0.5210084033613446, 'f1': 0.47692307692307695, 'number': 119} | {'precision': 0.8216911764705882, 'recall': 0.8394366197183099, 'f1': 0.8304691128657686, 'number': 1065} | 0.7647 | 0.8103 | 0.7868 | 0.8104 |
- | 0.0361 | 14.0 | 140 | 1.0027 | {'precision': 0.7421171171171171, 'recall': 0.8145859085290482, 'f1': 0.7766647024160284, 'number': 809} | {'precision': 0.43884892086330934, 'recall': 0.5126050420168067, 'f1': 0.4728682170542636, 'number': 119} | {'precision': 0.8205128205128205, 'recall': 0.8413145539906103, 'f1': 0.8307834955957348, 'number': 1065} | 0.7626 | 0.8108 | 0.7860 | 0.8115 |
- | 0.0349 | 15.0 | 150 | 1.0045 | {'precision': 0.7348314606741573, 'recall': 0.8084054388133498, 'f1': 0.7698646262507357, 'number': 809} | {'precision': 0.44285714285714284, 'recall': 0.5210084033613446, 'f1': 0.47876447876447875, 'number': 119} | {'precision': 0.8211009174311926, 'recall': 0.8403755868544601, 'f1': 0.8306264501160092, 'number': 1065} | 0.7599 | 0.8083 | 0.7834 | 0.8106 |
-
-
- ### Framework versions
-
- - Transformers 4.21.2
- - Pytorch 1.11.0+cu113
- - Datasets 2.5.1
- - Tokenizers 0.12.1
  - Overall Recall: 0.8083
+ - Overall F1: 0.7866
  - Overall Accuracy: 0.8106
+ ## Deploy Model with Inference Endpoints
+
+ Before we can get started, make sure you meet all of the following requirements:
+
+ 1. An Organization/User with an active plan and *WRITE* access to the model repository.
+ 2. Access to the UI: [https://ui.endpoints.huggingface.co](https://ui.endpoints.huggingface.co/endpoints)
+
+ ### 1. Deploy LayoutLM and Send requests
+
+ In this tutorial, you will learn how to deploy a [LayoutLM](https://huggingface.co/docs/transformers/model_doc/layoutlm) model to [Hugging Face Inference Endpoints](https://huggingface.co/inference-endpoints) and how to integrate it via an API into your products.
+
+ This tutorial does not cover how to create the custom handler for inference. If you want to learn how to create a custom handler for Inference Endpoints, check out the [documentation](https://huggingface.co/docs/inference-endpoints/guides/custom_handler) or go through [“Custom Inference with Hugging Face Inference Endpoints”](https://www.philschmid.de/custom-inference-handler).
+
+ We are going to deploy [philschmid/layoutlm-funsd](https://huggingface.co/philschmid/layoutlm-funsd), which implements the following `handler.py`:
+
+ ```python
+ from typing import Dict, List, Any
+ from transformers import LayoutLMForTokenClassification, LayoutLMv2Processor
+ import torch
+ from subprocess import run
+
+ # install tesseract-ocr and pytesseract
+ run("apt install -y tesseract-ocr", shell=True, check=True)
+ run("pip install pytesseract", shell=True, check=True)
+
+ # helper function to unnormalize bboxes for drawing onto the image
+ def unnormalize_box(bbox, width, height):
+     return [
+         width * (bbox[0] / 1000),
+         height * (bbox[1] / 1000),
+         width * (bbox[2] / 1000),
+         height * (bbox[3] / 1000),
+     ]
+
+ # set device
+ device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
+
+ class EndpointHandler:
+     def __init__(self, path=""):
+         # load model and processor from path
+         self.model = LayoutLMForTokenClassification.from_pretrained(path).to(device)
+         self.processor = LayoutLMv2Processor.from_pretrained(path)
+
+     def __call__(self, data: Dict[str, bytes]) -> Dict[str, List[Any]]:
+         """
+         Args:
+             data (:obj:):
+                 includes the deserialized image file as PIL.Image
+         """
+         # process input
+         image = data.pop("inputs", data)
+
+         # process image
+         encoding = self.processor(image, return_tensors="pt")
+
+         # run prediction
+         with torch.inference_mode():
+             outputs = self.model(
+                 input_ids=encoding.input_ids.to(device),
+                 bbox=encoding.bbox.to(device),
+                 attention_mask=encoding.attention_mask.to(device),
+                 token_type_ids=encoding.token_type_ids.to(device),
+             )
+             predictions = outputs.logits.softmax(-1)
+
+         # post process output
+         result = []
+         for item, inp_ids, bbox in zip(
+             predictions.squeeze(0).cpu(), encoding.input_ids.squeeze(0).cpu(), encoding.bbox.squeeze(0).cpu()
+         ):
+             label = self.model.config.id2label[int(item.argmax().cpu())]
+             if label == "O":
+                 continue
+             score = item.max().item()
+             text = self.processor.tokenizer.decode(inp_ids)
+             bbox = unnormalize_box(bbox.tolist(), image.width, image.height)
+             result.append({"label": label, "score": score, "text": text, "bbox": bbox})
+         return {"predictions": result}
+ ```
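A quick aside on the `unnormalize_box` helper above: LayoutLM bounding boxes are normalized to a 0–1000 grid, so mapping them back to pixel coordinates is just a rescale by the image dimensions. A minimal standalone sketch (the box and image sizes below are made up):

```python
# LayoutLM bboxes live on a 0-1000 grid; rescale each coordinate
# by the real image size to get pixel coordinates.
def unnormalize_box(bbox, width, height):
    return [
        width * (bbox[0] / 1000),
        height * (bbox[1] / 1000),
        width * (bbox[2] / 1000),
        height * (bbox[3] / 1000),
    ]

# hypothetical 1000x2000 px page with one normalized box
box = unnormalize_box([100, 250, 500, 300], width=1000, height=2000)
print(box)  # [100.0, 500.0, 500.0, 600.0]
```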
+
+ ### 2. Send HTTP request using Python
+
+ Hugging Face Inference Endpoints can work directly with binary data, which means we can send the image of our document straight to the endpoint. We are going to use `requests` to send our requests (make sure you have it installed: `pip install requests`).
+
+ ```python
+ import requests as r
+ import mimetypes
+
+ ENDPOINT_URL = ""  # url of your endpoint
+ HF_TOKEN = ""  # organization token where you deployed your endpoint
+
+ def predict(path_to_image: str = None):
+     with open(path_to_image, "rb") as i:
+         b = i.read()
+     headers = {
+         "Authorization": f"Bearer {HF_TOKEN}",
+         "Content-Type": mimetypes.guess_type(path_to_image)[0],
+     }
+     response = r.post(ENDPOINT_URL, headers=headers, data=b)
+     return response.json()
+
+ prediction = predict(path_to_image="path_to_your_image.png")
+
+ print(prediction)
+ # {'predictions': [{'label': 'I-ANSWER', 'score': 0.4823932945728302, 'text': '[CLS]', 'bbox': [0.0, 0.0, 0.0, 0.0]}, {'label': 'B-HEADER', 'score': 0.992474377155304, 'text': 'your', 'bbox': [1712.529, 181.203, 1859.949, 228.88799999999998]},
+ ```
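The endpoint returns token-level predictions as a flat list. If you want fields rather than individual tokens, one option is to group the returned texts by entity type, stripping the `B-`/`I-` prefixes. This is only a sketch against the response shape shown above; the sample values are made up:

```python
from collections import defaultdict

# hypothetical response in the shape returned by the endpoint above
prediction = {
    "predictions": [
        {"label": "B-HEADER", "score": 0.99, "text": "your", "bbox": [1712.5, 181.2, 1859.9, 228.9]},
        {"label": "I-HEADER", "score": 0.98, "text": "company", "bbox": [1860.0, 181.2, 2000.0, 228.9]},
        {"label": "B-QUESTION", "score": 0.95, "text": "date:", "bbox": [10.0, 20.0, 50.0, 40.0]},
    ]
}

# group predicted texts by entity type (strip the B-/I- prefix)
grouped = defaultdict(list)
for pred in prediction["predictions"]:
    entity = pred["label"].split("-", 1)[-1]
    grouped[entity].append(pred["text"])

print(dict(grouped))  # {'HEADER': ['your', 'company'], 'QUESTION': ['date:']}
```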
+
+ ### 3. Draw result on image
+
+ To get a better understanding of what the model predicted, you can also draw the predictions onto the provided image.
+
+ ```python
+ from PIL import Image, ImageDraw, ImageFont
+
+ # draw results on image
+ def draw_result(path_to_image, result):
+     image = Image.open(path_to_image)
+     label2color = {
+         "B-HEADER": "blue",
+         "B-QUESTION": "red",
+         "B-ANSWER": "green",
+         "I-HEADER": "blue",
+         "I-QUESTION": "red",
+         "I-ANSWER": "green",
+     }
+
+     # draw predictions over the image
+     draw = ImageDraw.Draw(image)
+     font = ImageFont.load_default()
+     for res in result:
+         draw.rectangle(res["bbox"], outline="black")
+         draw.rectangle(res["bbox"], outline=label2color[res["label"]])
+         draw.text((res["bbox"][0] + 10, res["bbox"][1] - 10), text=res["label"], fill=label2color[res["label"]], font=font)
+     return image
+
+ draw_result("path_to_your_image.png", prediction["predictions"])
+ ```
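One caveat when drawing: the sample output above contains a degenerate `[0.0, 0.0, 0.0, 0.0]` box for the `[CLS]` token, and recent Pillow versions expect `x1 >= x0` and `y1 >= y0` in `ImageDraw.rectangle`. A hypothetical `sanitize_bbox` helper (not part of the commit, just a sketch) could clamp boxes to the image and order the corners before drawing:

```python
# clamp a predicted box into the image and order its corners so that
# x0 <= x1 and y0 <= y1, as ImageDraw.rectangle expects
def sanitize_bbox(bbox, width, height):
    x0, y0, x1, y1 = bbox
    x0, x1 = sorted(max(0.0, min(v, width)) for v in (x0, x1))
    y0, y1 = sorted(max(0.0, min(v, height)) for v in (y0, y1))
    return [x0, y0, x1, y1]

# an out-of-bounds, reversed box gets clamped and re-ordered
print(sanitize_bbox([120.0, 80.0, 60.0, 40.0], width=100.0, height=100.0))
# [60.0, 40.0, 100.0, 80.0]
```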