Update README.md
README.md CHANGED
@@ -9,7 +9,7 @@ inference:
 diversity_penalty: 3.01
 no_repeat_ngram_size: 2
 temperature: 0.8
-max_length:
+max_length: 64
 widget:
 - text: >-
 Learn to build generative AI applications with an expert AWS instructor with the 2-day Developing Generative AI Applications on AWS course.
@@ -27,7 +27,9 @@ widget:
 
 # Text Rewriter Paraphraser
 
-This repository contains a fine-tuned text-rewriting model based on the T5-Base with 223M parameters.
+This repository contains a fine-tuned text-rewriting model based on the T5-Base with 223M parameters.
+
+Developed by: https://exnrt.com
 
 ## Key Features:
 
@@ -49,13 +51,12 @@ from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
 # Replace 'YOUR_TOKEN' with your actual Hugging Face access token
 tokenizer = AutoTokenizer.from_pretrained("Ateeqq/Text-Rewriter-Paraphraser", token='YOUR_TOKEN')
 model = AutoModelForSeq2SeqLM.from_pretrained("Ateeqq/Text-Rewriter-Paraphraser", token='YOUR_TOKEN')
-
-```python
+
 text = "Data science is a field that deals with extracting knowledge and insights from data. "
 
 inputs = tokenizer(text, return_tensors="pt")
 
-output = model.generate(**inputs, max_length=
+output = model.generate(**inputs, max_length=64)
 
 print(tokenizer.decode(output[0]))
 ```
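
For readability, here is how the README's usage example reads after this commit, assembled from the hunk above into one runnable block. Only the import line (taken from the hunk's context) and a trailing comment are added:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Replace 'YOUR_TOKEN' with your actual Hugging Face access token
tokenizer = AutoTokenizer.from_pretrained("Ateeqq/Text-Rewriter-Paraphraser", token='YOUR_TOKEN')
model = AutoModelForSeq2SeqLM.from_pretrained("Ateeqq/Text-Rewriter-Paraphraser", token='YOUR_TOKEN')

text = "Data science is a field that deals with extracting knowledge and insights from data. "

inputs = tokenizer(text, return_tensors="pt")

output = model.generate(**inputs, max_length=64)

# decode() keeps the <pad>/</s> markers; pass skip_special_tokens=True to drop them
print(tokenizer.decode(output[0]))
```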
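The front-matter change in the first hunk only affects the hosted inference widget. As a rough sketch (not part of the commit), the same settings could be passed to `generate()` directly. The `num_beams`, `num_beam_groups`, and `num_return_sequences` values below are assumptions, since `diversity_penalty` only takes effect with group (diverse) beam search, and `temperature: 0.8` is omitted because it only applies when `do_sample=True`:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("Ateeqq/Text-Rewriter-Paraphraser", token='YOUR_TOKEN')
model = AutoModelForSeq2SeqLM.from_pretrained("Ateeqq/Text-Rewriter-Paraphraser", token='YOUR_TOKEN')

inputs = tokenizer(
    "Data science is a field that deals with extracting knowledge and insights from data. ",
    return_tensors="pt",
)

# Mirror the widget's inference parameters; the beam settings are assumed, not from the README.
outputs = model.generate(
    **inputs,
    max_length=64,            # matches the new `max_length: 64` in the front matter
    no_repeat_ngram_size=2,
    diversity_penalty=3.01,   # only used by group beam search
    num_beams=3,              # assumption
    num_beam_groups=3,        # assumption: one beam per group
    num_return_sequences=3,   # assumption: return one candidate per group
)
for seq in outputs:
    print(tokenizer.decode(seq, skip_special_tokens=True))
```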