Update README.md
README.md
CHANGED
@@ -40,4 +40,21 @@ The following hyperparameters were used during training:
 - Tokenizers 0.13.3
 
 ### Machine Used and time taken
-- RTX 3090: 8 hrs. 35 mins.
+- RTX 3090: 8 hrs. 35 mins.
+
+```python
+from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
+
+instruct_model = AutoModelForSeq2SeqLM.from_pretrained("shorthillsai/flan-t5-large-absa", device_map="auto")
+tokenizer = AutoTokenizer.from_pretrained("shorthillsai/flan-t5-large-absa", truncation=True)
+
+prompt = """Find the aspect based sentiment for the given review. 'Not present' if the aspect is absent.\n\nReview:I love the screen of this laptop and the battery life is amazing.\n\nAspect:Battery Life\n\nSentiment: """
+
+input_ids = tokenizer(prompt, return_tensors="pt").to("cuda").input_ids
+instruct_model_outputs = instruct_model.generate(input_ids=input_ids)
+instruct_model_text_output = tokenizer.decode(instruct_model_outputs[0], skip_special_tokens=True)
+```
+
+You can then use the pipeline to answer instructions:
+
+```python
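A minimal sketch of such a pipeline call, assuming the standard `transformers` `pipeline` API; the `absa_pipeline` and `result` names are illustrative and may differ from the card's own example:

```python
from transformers import pipeline

# Sketch only: uses the standard text2text-generation pipeline for this seq2seq model.
absa_pipeline = pipeline(
    "text2text-generation",
    model="shorthillsai/flan-t5-large-absa",
    device_map="auto",
)

prompt = """Find the aspect based sentiment for the given review. 'Not present' if the aspect is absent.\n\nReview:I love the screen of this laptop and the battery life is amazing.\n\nAspect:Battery Life\n\nSentiment: """

# The pipeline returns a list of dicts containing the generated text.
result = absa_pipeline(prompt, max_new_tokens=10)
print(result[0]["generated_text"])  # predicted sentiment for the given aspect
```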