library_name: transformers
tags:
- LoRA
- Adapter
---

# Training

This model adapter is designed for token classification: it extracts aspect terms from the input text and predicts the sentiment polarity expressed towards each extracted term. The extracted aspect terms are the span(s) of the input text on which a sentiment is expressed. The adapter was created with the [PEFT](https://huggingface.co/docs/peft/index) framework using [LoRA: Low-Rank Adaptation](https://arxiv.org/abs/2106.09685).

## Datasets

This model has been trained on the following datasets:

1. Aspect Based Sentiment Analysis SemEval Shared Tasks ([2014](https://aclanthology.org/S14-2004/), [2015](https://aclanthology.org/S15-2082/), [2016](https://aclanthology.org/S16-1002/))
2. Multi-Aspect Multi-Sentiment [MAMS](https://aclanthology.org/D19-1654/)

# Use

* Loading the base model and combining it with the LoRA parameters

```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
from peft import PeftModel

# prepare the label mappings
labels2ids = {"B-neu": 1, "I-neu": 2, "O": 0, "B-neg": 3, "B-con": 4, "I-pos": 5, "B-pos": 6, "I-con": 7, "I-neg": 8, "X": -100}
id2labels = {idx: label for label, idx in labels2ids.items()}

# load the tokenizer and the base model
base_id = 'FacebookAI/roberta-large'
tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForTokenClassification.from_pretrained(base_id, num_labels=len(labels2ids), id2label=id2labels, label2id=labels2ids)

# attach this adapter to the base model
model = PeftModel.from_pretrained(base_model, 'gauneg/roberta-large-absa-ate-sentiment-lora-adapter', is_trainable=False)
```

This model can be used in either of the following two ways:

1. Making token-level inference
2. Using a pipeline for end-to-end inference

## Making token-level inference

```python
# after loading the base model and the adapter as shown in the previous snippet
import torch

device = 'cuda' if torch.cuda.is_available() else 'cpu'
model.to(device)

text_input = "Been here a few times and food has always been good but service really suffers when it gets crowded."
tok_inputs = tokenizer(text_input, return_tensors="pt").to(device)

with torch.no_grad():
    y_pred = model(**tok_inputs)  # predicting the logits

y_pred_fin = y_pred.logits.argmax(dim=-1)[0]  # selecting the highest-scoring label for each token

decoded_pred = [id2labels[logx.item()] for logx in y_pred_fin]

# pair each token with its predicted label, dropping the <s> and </s> special tokens
tok_levl_pred = list(zip(tokenizer.convert_ids_to_tokens(tok_inputs['input_ids'][0]), decoded_pred))[1:-1]
```

Result in the `tok_levl_pred` variable:
```bash
[('Be', 'O'),
 ('en', 'O'),
 ('Ġhere', 'O'),
 ('Ġa', 'O'),
 ('Ġfew', 'O'),
 ('Ġtimes', 'O'),
 ('Ġand', 'O'),
 ('Ġfood', 'B-pos'),
 ('Ġhas', 'O'),
 ('Ġalways', 'O'),
 ('Ġbeen', 'O'),
 ('Ġgood', 'O'),
 ('Ġbut', 'O'),
 ('Ġservice', 'B-neg'),
 ('Ġreally', 'O'),
 ('Ġsuffers', 'O'),
 ('Ġwhen', 'O'),
 ('Ġit', 'O'),
 ('Ġgets', 'O'),
 ('Ġcrowded', 'O'),
 ('.', 'O')]
```
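
The BIO tags above can be collapsed into `(aspect term, polarity)` pairs with a small helper. A minimal sketch, assuming the RoBERTa BPE convention that `Ġ` (U+0120) marks a leading space; `bio_to_spans` is an illustrative helper, not part of the adapter:

```python
def bio_to_spans(token_label_pairs):
    """Group (token, BIO-tag) pairs into (aspect_term, polarity) spans."""
    def finish(toks, pol, spans):
        # join BPE pieces, turn the 'Ġ' space marker back into a space
        spans.append(("".join(toks).replace("\u0120", " ").strip(), pol))

    spans, cur_toks, cur_pol = [], [], None
    for tok, tag in token_label_pairs:
        if tag.startswith("B-"):  # a new aspect span begins
            if cur_toks:
                finish(cur_toks, cur_pol, spans)
            cur_toks, cur_pol = [tok], tag[2:]
        elif tag.startswith("I-") and cur_toks:  # span continues
            cur_toks.append(tok)
        else:  # 'O' (or a stray 'I-') closes any open span
            if cur_toks:
                finish(cur_toks, cur_pol, spans)
            cur_toks, cur_pol = [], None
    if cur_toks:
        finish(cur_toks, cur_pol, spans)
    return spans

print(bio_to_spans([("Ġfood", "B-pos"), ("Ġhas", "O"), ("Ġservice", "B-neg")]))
# [('food', 'pos'), ('service', 'neg')]
```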

## Using the end-to-end token classification pipeline

```python
# after loading the base model and the adapter as shown in the previous snippet
from transformers import pipeline

ate_senti_pipeline = pipeline(task='ner',
                              aggregation_strategy='simple',
                              model=model,
                              tokenizer=tokenizer)

text_input = "Been here a few times and food has always been good but service really suffers when it gets crowded."
ate_senti_pipeline(text_input)
```

OUTPUT:
```bash
[{'entity_group': 'pos',
  'score': 0.92310727,
  'word': ' food',
  'start': 26,
  'end': 30},
 {'entity_group': 'neg',
  'score': 0.90695626,
  'word': ' service',
  'start': 56,
  'end': 63}]
```
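
The pipeline output can be post-processed into plain `(aspect term, polarity)` pairs, optionally dropping low-confidence predictions. A minimal sketch; `extract_aspects` and the 0.5 threshold are illustrative choices, not part of the model:

```python
def extract_aspects(pipe_output, min_score=0.5):
    """Turn pipeline entity groups into (aspect_term, polarity) pairs,
    keeping only predictions scoring at or above min_score."""
    return [(ent["word"].strip(), ent["entity_group"])
            for ent in pipe_output if ent["score"] >= min_score]

# the OUTPUT shown above
sample = [{'entity_group': 'pos', 'score': 0.92310727, 'word': ' food', 'start': 26, 'end': 30},
          {'entity_group': 'neg', 'score': 0.90695626, 'word': ' service', 'start': 56, 'end': 63}]

print(extract_aspects(sample))  # [('food', 'pos'), ('service', 'neg')]
```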