---
language:
- en
tags:
- dialogue
- conversation-generator
- flan-t5-base
- fine-tuned
license: apache-2.0
datasets:
- kaggle-dialogue-dataset
---

# Omaratef3221/flan-t5-base-dialogue-generator

## Model Description
This model is a fine-tuned version of Google's `flan-t5-base`, tailored for generating realistic and engaging dialogues.
It has been trained to capture the nuances of human conversation, making it well suited for applications that require conversational AI capabilities.

## Intended Use
`Omaratef3221/flan-t5-base-dialogue-generator` is ideal for developing chatbots, virtual assistants, and other applications where generating human-like dialogue is crucial.
It can also be used for research and development in natural language understanding and generation.

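Because the model was fine-tuned on prompts of a fixed shape, applications typically template the topic description into that shape before calling the model. A minimal sketch, using our own hypothetical helper `build_dialogue_prompt` (not part of the model or the `transformers` API):

```python
def build_dialogue_prompt(topic: str) -> str:
    """Wrap a topic description in the prompt template shown in the usage example below.

    This is our own convenience helper, not part of the model's API.
    """
    return (
        "\nGenerate a dialogue between two people about the following topic:\n"
        f"{topic}\nDialogue:\n"
    )

# Build a prompt for a new topic; the result can be tokenized and passed to model.generate
prompt = build_dialogue_prompt(
    "#Person1# asks #Person2# for directions to the train station."
)
print(prompt)
```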
## How to Use
You can use this model directly with the `transformers` library as follows:

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_name = "Omaratef3221/flan-t5-base-dialogue-generator"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

prompt = '''
Generate a dialogue between two people about the following topic:
A local street market bustles with activity, #Person1# tries exotic food for the first time, and #Person2#, familiar with the cuisine, offers insights and recommendations.
Dialogue:
'''

# Tokenize the prompt and sample a dialogue from the model
input_ids = tokenizer(prompt, return_tensors="pt").input_ids
output = model.generate(input_ids, top_p=0.6, do_sample=True, temperature=1.2, max_length=512)

# Decode the generated dialogue, printing one sentence per line
print(tokenizer.decode(output[0], skip_special_tokens=True).replace(". ", ".\n"))
```
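Generated dialogues follow the `#Person1#`/`#Person2#` speaker convention from the training data, so downstream code often wants the output as a list of turns rather than one string. A small sketch, using our own hypothetical `split_turns` post-processing helper:

```python
import re

def split_turns(dialogue: str) -> list[str]:
    """Split a generated dialogue into speaker turns at #PersonN# markers.

    This is our own post-processing helper, not part of the model's API.
    """
    # The lookahead keeps each #PersonN# marker attached to the turn that follows it
    parts = re.split(r"(?=#Person\d+#)", dialogue)
    return [part.strip() for part in parts if part.strip()]

sample = "#Person1#: What is this dish? #Person2#: Grilled squid, a local favourite."
for turn in split_turns(sample):
    print(turn)
```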