Llama-Chat-Summary-3.2-3B: Context-Aware Summarization Model
Llama-Chat-Summary-3.2-3B is a fine-tuned model designed for generating context-aware summaries of long conversational or text-based inputs. Built on the meta-llama/Llama-3.2-3B-Instruct foundation, this model is optimized to process structured and unstructured conversational data for summarization tasks.
| File Name | Size | Description | Upload Status |
|---|---|---|---|
| .gitattributes | 1.57 kB | Git LFS tracking configuration. | Uploaded |
| README.md | 42 Bytes | Initial commit with minimal documentation. | Uploaded |
| config.json | 1.03 kB | Model configuration settings. | Uploaded |
| generation_config.json | 248 Bytes | Generation-specific configurations. | Uploaded |
| pytorch_model-00001-of-00002.bin | 4.97 GB | Part 1 of the PyTorch model weights. | Uploaded (LFS) |
| pytorch_model-00002-of-00002.bin | 1.46 GB | Part 2 of the PyTorch model weights. | Uploaded (LFS) |
| pytorch_model.bin.index.json | 21.2 kB | Index file for the sharded model weights. | Uploaded |
| special_tokens_map.json | 477 Bytes | Mapping of special tokens for the tokenizer. | Uploaded |
| tokenizer.json | 17.2 MB | Pre-trained tokenizer file. | Uploaded (LFS) |
| tokenizer_config.json | 57.4 kB | Configuration file for the tokenizer. | Uploaded |
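The sharded weights total roughly 6.4 GB. If you want a local copy of the full repository before loading, it can be fetched with huggingface_hub; a minimal sketch:

```python
from huggingface_hub import snapshot_download

# Download every file in the repo (the large weight files come via Git LFS).
local_dir = snapshot_download("prithivMLmods/Llama-Chat-Summary-3.2-3B")
print(local_dir)  # local cache path, usable with from_pretrained()
```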
Key Features
Conversation Summarization:
- Generates concise and meaningful summaries of long chats, discussions, or threads.
Context Preservation:
- Maintains critical points, ensuring important details aren't omitted.
Text Summarization:
- Works beyond chats; supports summarizing articles, documents, or reports.
Fine-Tuned Efficiency:
- Fine-tuned on the prithivMLmods/Context-Based-Chat-Summary-Plus dataset for accurate summarization of chat and conversational data.
Training Details
- Base Model: meta-llama/Llama-3.2-3B-Instruct
- Fine-Tuning Dataset: prithivMLmods/Context-Based-Chat-Summary-Plus
- Contains 98.4k structured and unstructured conversations, summaries, and contextual inputs for robust training (a loading sketch follows).
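To inspect the training data yourself, the dataset can be pulled with the datasets library; a minimal sketch, assuming the dataset is public and exposes a standard train split:

```python
from datasets import load_dataset

# Load the fine-tuning corpus (assumption: a standard "train" split exists).
ds = load_dataset("prithivMLmods/Context-Based-Chat-Summary-Plus", split="train")
print(len(ds))  # ~98.4k examples
print(ds[0])    # inspect the fields of one record
```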
Applications
Customer Support Logs:
- Summarize chat logs or support tickets for insights and reporting.
Meeting Notes:
- Generate concise summaries of meeting transcripts.
Document Summarization:
- Create short summaries for lengthy reports or articles.
Content Generation Pipelines:
- Automate summarization for newsletters, blogs, or email digests.
Context Extraction for AI Systems:
- Preprocess chat or conversation logs for downstream AI applications; see the prompt-formatting sketch after this list.
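For log preprocessing, a small helper can flatten a structured chat log into the plain-text prompt format used in the examples below. Note that `chat_to_prompt` is a hypothetical name for illustration, not part of the model or its tooling:

```python
# Hypothetical helper (not shipped with the model): flatten a structured
# chat log into the "Speaker: message" prompt format the examples use.
def chat_to_prompt(turns):
    """turns: iterable of (speaker, message) pairs."""
    lines = [f"{speaker}: {message}" for speaker, message in turns]
    return "Summarize the following conversation:\n" + "\n".join(lines)

prompt = chat_to_prompt([
    ("User1", "Hey, I need help with my order. It hasn't arrived yet."),
    ("User2", "I'm sorry to hear that. Can you provide your order number?"),
])
```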
Load the Model
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "prithivMLmods/Llama-Chat-Summary-3.2-3B"

# Download the tokenizer and model weights from the Hugging Face Hub.
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
```
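On a GPU, loading in half precision roughly halves memory use; a one-line variant (assumes a CUDA device is available):

```python
import torch

# Optional: half-precision weights, automatically placed on the GPU.
model = AutoModelForCausalLM.from_pretrained(
    model_name, torch_dtype=torch.float16, device_map="auto"
)
```

The quantization sketch under Deployment Notes reduces memory use further.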
Generate a Summary
prompt = """
Summarize the following conversation:
User1: Hey, I need help with my order. It hasn't arrived yet.
User2: I'm sorry to hear that. Can you provide your order number?
User1: Sure, it's 12345.
User2: Let me check... It seems there was a delay. It should arrive tomorrow.
User1: Okay, thank you!
"""
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_length=100, temperature=0.7)
summary = tokenizer.decode(outputs[0], skip_special_tokens=True)
print("Summary:", summary)
Expected Output
"The user reported a delayed order (12345), and support confirmed it will arrive tomorrow."
Deployment Notes
Serverless API:
- This model does not yet have enough usage to qualify for serverless endpoints; use a dedicated endpoint for deployment.

Performance Requirements:
- A GPU with sufficient memory is recommended for a model of this size.
- Optimization techniques such as quantization can improve inference efficiency; a 4-bit loading sketch follows.
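A minimal 4-bit quantization sketch using bitsandbytes through transformers (assumes a CUDA GPU and the bitsandbytes package installed):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_name = "prithivMLmods/Llama-Chat-Summary-3.2-3B"

# Quantize weights to 4-bit at load time; computation runs in fp16.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.float16,
)

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    quantization_config=bnb_config,
    device_map="auto",  # place layers on the available GPU(s)
)
```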