---
library_name: peft
pipeline_tag: summarization
tags:
- transformers
- summarization
- dialogue-summarization
- LoRA
- PEFT
datasets:
- knkarthick/dialogsum
---
# ConvoBrief: LoRA-enhanced BART Model for Dialogue Summarization

This model is a variant of the `facebook/bart-large-cnn` model, adapted with LoRA (Low-Rank Adaptation) for dialogue summarization. LoRA freezes the pretrained weights and injects small trainable low-rank matrices into selected attention projections, which makes it practical to specialize the model for conversational text while training only a small fraction of its parameters.
+
## LoRA Configuration:
|
18 |
+
|
19 |
+
* r: 8 (Number of attention heads in LoRA)
|
20 |
+
* lora_alpha: 8 (Scaling factor for LoRA attention)
|
21 |
+
* target_modules: ["q_proj", "v_proj"] (Modules targeted for LoRA, enhancing query and value projections)
|
22 |
+
* lora_dropout: 0.05 (Dropout rate for LoRA)
|
23 |
+
* bias: "lora_only" (Bias setting for LoRA)
|
24 |
+
* task_type: Dialogue Summarization (SEQ_2_SEQ_LM)
|
25 |
+
|
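These hyperparameters map directly onto PEFT's `LoraConfig`. A minimal sketch of the corresponding configuration (a hypothetical reconstruction; the actual training script is not included in this repository):

```python
from peft import LoraConfig, TaskType

# Reconstruction of the adapter configuration listed above
lora_config = LoraConfig(
    r=8,                                  # rank of the low-rank update matrices
    lora_alpha=8,                         # scaling factor for the LoRA update
    target_modules=["q_proj", "v_proj"],  # adapt the query and value projections
    lora_dropout=0.05,                    # dropout applied to the LoRA layers
    bias="lora_only",                     # train only the LoRA bias terms
    task_type=TaskType.SEQ_2_SEQ_LM,      # sequence-to-sequence language modeling
)
```

Applying it with `get_peft_model(base_model, lora_config)` would wrap a fresh base model with trainable LoRA layers.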
This model was fine-tuned with the PEFT (Parameter-Efficient Fine-Tuning) library on the knkarthick/dialogsum dataset, updating only the LoRA parameters while the base model weights remain frozen.

## Usage:

Deploy this LoRA-enhanced BART model for dialogue summarization tasks, where it distills conversational text into concise yet informative summaries.

```python
from peft import PeftModel, PeftConfig
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer, pipeline

# Load the PEFT config and the base model it was trained on
config = PeftConfig.from_pretrained("Ketan3101/ConvoBrief")
base_model = AutoModelForSeq2SeqLM.from_pretrained("facebook/bart-large-cnn")

# Wrap the base model with the LoRA adapter weights
model = PeftModel.from_pretrained(base_model, "Ketan3101/ConvoBrief")

# Load the tokenizer of the base model
tokenizer = AutoTokenizer.from_pretrained("facebook/bart-large-cnn")

# Define a pipeline for dialogue summarization
summarization_pipeline = pipeline(
    "summarization",
    model=model,
    tokenizer=tokenizer,
)

# Example dialogue for summarization (one string per turn)
dialogue = [
    "#Person1#: Happy Birthday, this is for you, Brian.",
    "#Person2#: I'm so happy you remember, please come in and enjoy the party. Everyone's here, I'm sure you have a good time.",
    "#Person1#: Brian, may I have a pleasure to have a dance with you?",
    "#Person2#: Ok.",
    "#Person1#: This is really wonderful party.",
    "#Person2#: Yes, you are always popular with everyone. and you look very pretty today.",
    "#Person1#: Thanks, that's very kind of you to say. I hope my necklace goes with my dress, and they both make me look good I feel.",
    "#Person2#: You look great, you are absolutely glowing.",
    "#Person1#: Thanks, this is a fine party. We should have a drink together to celebrate your birthday",
]

# Combine the dialogue turns into a single string
full_dialogue = " ".join(dialogue)

# Generate a summary
summary = summarization_pipeline(full_dialogue, max_length=150, min_length=40, do_sample=True)

print("Original Dialogue:\n", full_dialogue)
print("Generated Summary:\n", summary[0]["summary_text"])
```
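For deployment, the adapter can optionally be merged into the base weights so inference runs without the LoRA indirection. A minimal sketch, reusing the objects from the example above; `merge_and_unload` is PEFT's LoRA merge helper, though its availability may depend on your peft version:

```python
# Fold the LoRA update into the base weights; the result behaves like a plain BART model
merged_model = model.merge_and_unload()

merged_pipeline = pipeline("summarization", model=merged_model, tokenizer=tokenizer)
print(merged_pipeline(full_dialogue, max_length=150, min_length=40)[0]["summary_text"])
```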
Feel free to adapt this example to your own dialogues; generation arguments such as `max_length`, `min_length`, and `do_sample` can be tuned to your use case.

### Framework versions

- PEFT 0.4.0