---
license: creativeml-openrail-m
datasets:
- prithivMLmods/Context-Based-Chat-Summary-Plus
language:
- en
base_model: prithivMLmods/Llama-Chat-Summary-3.2-3B
pipeline_tag: text-generation
library_name: transformers
tags:
- safetensors
- chat-summary
- 3B
- Ollama
- text-generation-inference
- trl
- Llama3.2
- llama-cpp
- gguf-my-repo
---
# Triangle104/Llama-Chat-Summary-3.2-3B-Q4_K_M-GGUF
This model was converted to GGUF format from [`prithivMLmods/Llama-Chat-Summary-3.2-3B`](https://huggingface.co/prithivMLmods/Llama-Chat-Summary-3.2-3B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/prithivMLmods/Llama-Chat-Summary-3.2-3B) for more details on the model.
---
## Model details

**Llama-Chat-Summary-3.2-3B: Context-Aware Summarization Model**

Llama-Chat-Summary-3.2-3B is a fine-tuned model designed to generate context-aware summaries of long conversational or text-based inputs. Built on the meta-llama/Llama-3.2-3B-Instruct foundation, it is optimized to process structured and unstructured conversational data for summarization tasks.
### Key Features

- **Conversation Summarization:** Generates concise and meaningful summaries of long chats, discussions, or threads.
- **Context Preservation:** Maintains critical points, ensuring important details aren't omitted.
- **Text Summarization:** Works beyond chats; supports summarizing articles, documents, or reports.
- **Fine-Tuned Efficiency:** Trained on the Context-Based-Chat-Summary-Plus dataset for accurate summarization of chat and conversational data.
### Training Details

- **Base Model:** meta-llama/Llama-3.2-3B-Instruct
- **Fine-Tuning Dataset:** prithivMLmods/Context-Based-Chat-Summary-Plus, containing 98.4k structured and unstructured conversations, summaries, and contextual inputs for robust training.
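For reference, the fine-tuning dataset can be pulled from the Hub with the `datasets` library. A minimal sketch, assuming `datasets` is installed; the `train` split name and the printed field names are whatever the dataset actually provides, not something this card specifies:

```python
from datasets import load_dataset

# Download the fine-tuning dataset from the Hugging Face Hub.
dataset = load_dataset("prithivMLmods/Context-Based-Chat-Summary-Plus", split="train")

# Inspect its size and schema before using it for further fine-tuning or evaluation.
print(len(dataset))          # expected to be on the order of 98.4k examples
print(dataset.column_names)  # actual field names come from the dataset itself
print(dataset[0])
```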
### Applications

- **Customer Support Logs:** Summarize chat logs or support tickets for insights and reporting.
- **Meeting Notes:** Generate concise summaries of meeting transcripts.
- **Document Summarization:** Create short summaries for lengthy reports or articles.
- **Content Generation Pipelines:** Automate summarization for newsletters, blogs, or email digests.
- **Context Extraction for AI Systems:** Preprocess chat or conversation logs for downstream AI applications.
### Load the Model

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "prithivMLmods/Llama-Chat-Summary-3.2-3B"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
```
### Generate a Summary

```python
prompt = """
Summarize the following conversation:
User1: Hey, I need help with my order. It hasn't arrived yet.
User2: I'm sorry to hear that. Can you provide your order number?
User1: Sure, it's 12345.
User2: Let me check... It seems there was a delay. It should arrive tomorrow.
User1: Okay, thank you!
"""

inputs = tokenizer(prompt, return_tensors="pt")
# max_new_tokens bounds only the generated summary (max_length would also count the prompt tokens);
# do_sample=True is required for the temperature setting to take effect.
outputs = model.generate(**inputs, max_new_tokens=100, do_sample=True, temperature=0.7)
summary = tokenizer.decode(outputs[0], skip_special_tokens=True)
print("Summary:", summary)
```
### Expected Output

"The user reported a delayed order (12345), and support confirmed it will arrive tomorrow."
### Deployment Notes

- **Serverless API:** This model currently lacks sufficient usage for serverless endpoints; use dedicated endpoints for deployment.
- **Performance Requirements:** A GPU with sufficient memory is recommended. Optimization techniques such as quantization can improve inference efficiency (see the sketch below).
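As one illustration of the quantization point, here is a minimal sketch of loading the model in 4-bit with bitsandbytes through transformers. It assumes a CUDA GPU and the `bitsandbytes` package, and is not the quantization recipe used by the model authors:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_name = "prithivMLmods/Llama-Chat-Summary-3.2-3B"

# 4-bit NF4 quantization keeps the 3B model comfortably within a single consumer GPU.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    quantization_config=bnb_config,
    device_map="auto",
)
```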
---
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/Llama-Chat-Summary-3.2-3B-Q4_K_M-GGUF --hf-file llama-chat-summary-3.2-3b-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/Llama-Chat-Summary-3.2-3B-Q4_K_M-GGUF --hf-file llama-chat-summary-3.2-3b-q4_k_m.gguf -c 2048
```
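Once `llama-server` is running, it exposes an OpenAI-compatible HTTP API (by default on port 8080). A minimal sketch of sending a summarization request from Python with `requests`; the prompt text and port are illustrative assumptions:

```python
import requests

# llama-server serves an OpenAI-compatible chat completions endpoint.
response = requests.post(
    "http://localhost:8080/v1/chat/completions",
    json={
        "messages": [
            {"role": "user", "content": "Summarize this conversation:\n"
                                        "User1: My order 12345 hasn't arrived.\n"
                                        "User2: There was a delay; it should arrive tomorrow."},
        ],
        "max_tokens": 100,
        "temperature": 0.7,
    },
    timeout=120,
)
print(response.json()["choices"][0]["message"]["content"])
```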
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```bash
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (for example, `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```bash
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```bash
./llama-cli --hf-repo Triangle104/Llama-Chat-Summary-3.2-3B-Q4_K_M-GGUF --hf-file llama-chat-summary-3.2-3b-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```bash
./llama-server --hf-repo Triangle104/Llama-Chat-Summary-3.2-3B-Q4_K_M-GGUF --hf-file llama-chat-summary-3.2-3b-q4_k_m.gguf -c 2048
```
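Alternatively, the quantized GGUF can be used from Python via the llama-cpp-python bindings. A minimal sketch, assuming `llama-cpp-python` and `huggingface-hub` are installed; the example prompt is illustrative:

```python
from llama_cpp import Llama

# Download the quantized GGUF from the Hub and load it with a 2048-token context.
llm = Llama.from_pretrained(
    repo_id="Triangle104/Llama-Chat-Summary-3.2-3B-Q4_K_M-GGUF",
    filename="llama-chat-summary-3.2-3b-q4_k_m.gguf",
    n_ctx=2048,
)

result = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize: the support team confirmed order 12345 will arrive tomorrow."}],
    max_tokens=100,
    temperature=0.7,
)
print(result["choices"][0]["message"]["content"])
```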