---
language:
- en
widget:
- text: "In fiscal year 2019, we reduced our comprehensive carbon footprint for the fourth consecutive year—down 35 percent compared to 2015, when Apple’s carbon emissions peaked, even as net revenue increased by 11 percent over that same period. In the past year, we avoided over 10 million metric tons from our emissions reduction initiatives—like our Supplier Clean Energy Program, which lowered our footprint by 4.4 million metric tons. "
  example_title: "Reduced carbon footprint"
- text: "We believe it is essential to establish validated conflict-free sources of 3TG within the Democratic Republic of the Congo (the “DRC”) and adjoining countries (together, with the DRC, the “Covered Countries”), so that these minerals can be procured in a way that contributes to economic growth and development in the region. To aid in this effort, we have established a conflict minerals policy and an internal team to implement the policy."
  example_title: "Conflict minerals policy"
---
# Model Card for ESG-BERT
Domain Specific BERT Model for Text Mining in Sustainable Investing
 
 
 
# Model Details
 
## Model Description
 
 
 
- **Developed by:** [Charan Pothireddi](https://www.linkedin.com/in/sree-charan-pothireddi-6a0a3587/) and [Parabole.ai](https://www.linkedin.com/in/sree-charan-pothireddi-6a0a3587/)
- **Shared by [Optional]:** Hugging Face
- **Model type:** Language model
- **Language(s) (NLP):** en
- **License:** More information needed
- **Related Models:** 
  - **Parent Model:** BERT
- **Resources for more information:**
  - [GitHub Repo](https://github.com/mukut03/ESG-BERT)
  - [Blog Post](https://towardsdatascience.com/nlp-meets-sustainable-investing-d0542b3c264b?source=friends_link&sk=1f7e6641c3378aaff319a81decf387bf)
 
# Uses
 
 
## Direct Use
 
Text Mining in Sustainable Investing
 
## Downstream Use [Optional]
 
The applications of ESG-BERT extend well beyond text classification: it can be fine-tuned for a variety of other downstream NLP tasks in the Sustainable Investing domain, as sketched below.
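
For illustration, here is a minimal fine-tuning sketch using the Hugging Face `Trainer` API. The local checkpoint path `./bert_model`, the data file `esg_sentences.csv`, and the training settings are placeholders, not part of this model card; `num_labels=26` matches the label list shown later in this card.

```
# Hypothetical fine-tuning sketch; paths, data file, and hyperparameters
# are placeholders to be replaced with your own.
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("./bert_model")
model = AutoModelForSequenceClassification.from_pretrained(
    "./bert_model", num_labels=26  # 26 ESG categories, per the label list below
)

# Expects a CSV with "text" and "label" columns (placeholder file name).
dataset = load_dataset("csv", data_files="esg_sentences.csv")
dataset = dataset.map(
    lambda batch: tokenizer(batch["text"], truncation=True, padding="max_length"),
    batched=True,
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="esg-bert-finetuned", num_train_epochs=3),
    train_dataset=dataset["train"],
)
trainer.train()
```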
 
## Out-of-Scope Use
 
The model should not be used to intentionally create hostile or alienating environments for people. 
# Bias, Risks, and Limitations
 
 
Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)). Predictions generated by the model may include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups.
 
 
## Recommendations
 
 
Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information needed for further recommendations.
 
 
# Training Details
 
## Training Data
 
More information needed
 
## Training Procedure
 
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
 
### Preprocessing
 
More information needed
 
### Speeds, Sizes, Times
 
More information needed
 
# Evaluation
 
 
 
## Testing Data, Factors & Metrics
 
### Testing Data
 
The fine-tuned model for text classification is also available [here](https://drive.google.com/drive/folders/1Qz4HP3xkjLfJ6DGCFNeJ7GmcPq65_HVe?usp=sharing). It can be used directly to make predictions in just a few steps. First, download the fine-tuned pytorch_model.bin, config.json, and vocab.txt files.
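
Once those files are saved in a local directory (the `./bert_model` path below is a placeholder), a prediction looks roughly like this:

```
# Minimal prediction sketch; "./bert_model" is a placeholder for the directory
# holding the downloaded pytorch_model.bin, config.json, and vocab.txt.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("./bert_model")
model = AutoModelForSequenceClassification.from_pretrained("./bert_model")
model.eval()

inputs = tokenizer("We reduced our carbon footprint by 35 percent.",
                   return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

print(logits.argmax(dim=-1).item())  # label index; see the label list below
```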
 
### Factors
 
More information needed
 
### Metrics
 
More information needed
 
## Results 
 
ESG-BERT was further trained on unstructured text data, reaching accuracies of 100% and 98% on the Next Sentence Prediction and Masked Language Modelling tasks, respectively. Fine-tuning ESG-BERT for text classification yielded an F1 score of 0.90. For comparison, the general BERT (BERT-base) model scored 0.79 after fine-tuning, and the scikit-learn approach scored 0.67.
 
# Model Examination
 
More information needed
 
# Environmental Impact
 
 
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
 
- **Hardware Type:** More information needed
- **Hours used:** More information needed
- **Cloud Provider:** More information needed
- **Compute Region:** More information needed
- **Carbon Emitted:** More information needed
 
# Technical Specifications [optional]
 
## Model Architecture and Objective
 
More information needed
 
## Compute Infrastructure
 
More information needed
 
### Hardware
 
More information needed
 
### Software
 
JDK 11 is required to serve the model with TorchServe.
 
# Citation
 
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
 
**BibTeX:**
 
More information needed
 
**APA:**
 
More information needed
 
# Glossary [optional]
 
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
 
More information needed
 
# More Information [optional]
 
More information needed
 
# Model Card Authors [optional]
[Charan Pothireddi](https://www.linkedin.com/in/sree-charan-pothireddi-6a0a3587/) and [Parabole.ai](https://www.linkedin.com/in/sree-charan-pothireddi-6a0a3587/), in collaboration with Ezi Ozoani and the Hugging Face team
 
 
# Model Card Contact
 
More information needed
 
# How to Get Started with the Model
 
Use the code below to get started with the model.
 
<details>
 <summary> Click to expand </summary>
 
```
pip install torchserve torch-model-archiver
pip install torchvision
pip install transformers
```
 
Next up, we'll set up the handler script. It is a basic handler for text classification that can be improved upon. Save this script as "handler.py" in your directory. [1]
 
```
from abc import ABC
import json
import logging
import os

import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from ts.torch_handler.base_handler import BaseHandler

logger = logging.getLogger(__name__)


class TransformersClassifierHandler(BaseHandler, ABC):
    """
    Transformers text classifier handler class. This handler takes a text
    (string) as input and returns the classification label based on the
    serialized transformers checkpoint.
    """

    def __init__(self):
        super(TransformersClassifierHandler, self).__init__()
        self.initialized = False
        # Default to None so inference() works even without a mapping file.
        self.mapping = None

    def initialize(self, ctx):
        self.manifest = ctx.manifest
        properties = ctx.system_properties
        model_dir = properties.get("model_dir")
        self.device = torch.device(
            "cuda:" + str(properties.get("gpu_id"))
            if torch.cuda.is_available() else "cpu"
        )

        # Read the serialized model file
        self.model = AutoModelForSequenceClassification.from_pretrained(model_dir)
        self.tokenizer = AutoTokenizer.from_pretrained(model_dir)
        self.model.to(self.device)
        self.model.eval()

        logger.debug('Transformer model from path {0} loaded successfully'.format(model_dir))

        # Read the mapping file, index to object name
        mapping_file_path = os.path.join(model_dir, "index_to_name.json")
        if os.path.isfile(mapping_file_path):
            with open(mapping_file_path) as f:
                self.mapping = json.load(f)
        else:
            logger.warning('Missing the index_to_name.json file. Inference output will not include class name.')

        self.initialized = True

    def preprocess(self, data):
        """Very basic preprocessing code - only tokenizes.
        Extend with your own preprocessing steps as needed.
        """
        text = data[0].get("data")
        if text is None:
            text = data[0].get("body")
        sentences = text.decode('utf-8')
        logger.info("Received text: '%s'", sentences)

        inputs = self.tokenizer.encode_plus(
            sentences,
            add_special_tokens=True,
            return_tensors="pt"
        )
        return inputs

    def inference(self, inputs):
        """
        Predict the class of a text using a trained transformer model.
        """
        # NOTE: This assumes the model expects text tokenized with "input_ids"
        # and "token_type_ids", which is true for some popular transformer
        # models, e.g. BERT. If your model expects a different tokenization,
        # adapt this code to suit its expected input format.
        prediction = self.model(
            inputs['input_ids'].to(self.device),
            token_type_ids=inputs['token_type_ids'].to(self.device)
        )[0].argmax().item()
        logger.info("Model predicted: '%s'", prediction)

        if self.mapping:
            prediction = self.mapping[str(prediction)]
        return [prediction]

    def postprocess(self, inference_output):
        # TODO: Add any needed post-processing of the model predictions here
        return inference_output


_service = TransformersClassifierHandler()


def handle(data, context):
    try:
        if not _service.initialized:
            _service.initialize(context)
        if data is None:
            return None

        data = _service.preprocess(data)
        data = _service.inference(data)
        data = _service.postprocess(data)
        return data
    except Exception as e:
        raise e
```
 
TorchServe uses a format called MAR (Model Archive). We can convert our PyTorch model to a .mar file using this command:
 
```
torch-model-archiver --model-name "bert" --version 1.0 --serialized-file ./bert_model/pytorch_model.bin --extra-files "./bert_model/config.json,./bert_model/vocab.txt" --handler "./handler.py"
```
 
Move the .mar file into a new directory: 
 
```
mkdir model_store && mv bert.mar model_store
```
 
Finally, we can start TorchServe using the command: 
 
```
torchserve --start --model-store model_store --models bert=bert.mar
```
 
We can now query the model from another terminal window using the Inference API, passing a text file containing the text the model will try to classify:

```
curl -X POST http://127.0.0.1:8080/predictions/bert -T predict.txt
```
 
This returns a label number that corresponds to a textual label, as stored in the label_dict.txt dictionary file:
 
```
__label__Business_Ethics :  0
__label__Data_Security :  1
__label__Access_And_Affordability :  2
__label__Business_Model_Resilience :  3
__label__Competitive_Behavior :  4
__label__Critical_Incident_Risk_Management :  5
__label__Customer_Welfare :  6
__label__Director_Removal :  7
__label__Employee_Engagement_Inclusion_And_Diversity :  8
__label__Employee_Health_And_Safety :  9
__label__Human_Rights_And_Community_Relations :  10
__label__Labor_Practices :  11
__label__Management_Of_Legal_And_Regulatory_Framework :  12
__label__Physical_Impacts_Of_Climate_Change :  13
__label__Product_Quality_And_Safety :  14
__label__Product_Design_And_Lifecycle_Management :  15
__label__Selling_Practices_And_Product_Labeling :  16
__label__Supply_Chain_Management :  17
__label__Systemic_Risk_Management :  18
__label__Waste_And_Hazardous_Materials_Management :  19
__label__Water_And_Wastewater_Management :  20
__label__Air_Quality :  21
__label__Customer_Privacy :  22
__label__Ecological_Impacts :  23
__label__Energy_Management :  24
__label__GHG_Emissions :  25
```
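
If you want the handler above to return these textual labels instead of raw indices, one option (a sketch, using the mapping shown here) is to generate the index_to_name.json file the handler looks for, then include it via --extra-files when running torch-model-archiver so it lands in the model directory:

```
# Sketch: write the index-to-name mapping the handler reads,
# using the label assignments listed above.
import json

labels = [
    "Business_Ethics", "Data_Security", "Access_And_Affordability",
    "Business_Model_Resilience", "Competitive_Behavior",
    "Critical_Incident_Risk_Management", "Customer_Welfare", "Director_Removal",
    "Employee_Engagement_Inclusion_And_Diversity", "Employee_Health_And_Safety",
    "Human_Rights_And_Community_Relations", "Labor_Practices",
    "Management_Of_Legal_And_Regulatory_Framework",
    "Physical_Impacts_Of_Climate_Change", "Product_Quality_And_Safety",
    "Product_Design_And_Lifecycle_Management",
    "Selling_Practices_And_Product_Labeling", "Supply_Chain_Management",
    "Systemic_Risk_Management", "Waste_And_Hazardous_Materials_Management",
    "Water_And_Wastewater_Management", "Air_Quality", "Customer_Privacy",
    "Ecological_Impacts", "Energy_Management", "GHG_Emissions",
]

with open("index_to_name.json", "w") as f:
    json.dump({str(i): name for i, name in enumerate(labels)}, f, indent=2)
```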

</details>