Model Card: TunChat-V0.2

Model Overview:

  • Model Name: TunChat-V0.2
  • Model Size: 2B parameters (2.61B total, BF16)
  • Instruction-Tuned: Yes
  • Language: Tunisian Dialect
  • Use Case Focus: Conversational exchanges, translation, summarization, content generation, and cultural research.

Model Description: TunChat-V0.2 is a 2-billion-parameter language model instruction-tuned specifically for the Tunisian dialect. It is designed for tasks such as conversational exchanges, informal text summarization, and culturally aware content generation. The model is optimized to understand and generate text in the Tunisian dialect, improving performance for applications targeting Tunisian users.

Intended Use:

  • Conversational agents and chatbots operating in Tunisian Dialect.
  • Translation, summarization, and content generation in informal Tunisian dialect.
  • Supporting cultural research related to Tunisian language and heritage.

How to Use:

import torch
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="saifamdouni/TunChat-V0.2",
    model_kwargs={"torch_dtype": torch.bfloat16},
    device="cuda" # replace with "mps" to run on a Mac device
)

messages = [
    {"role": "user", "content": "شكون صنعك"},  # "Who made you?"
]

outputs = pipe(
    messages,
    max_new_tokens=2048,
    do_sample=True,
    temperature=0.7,
    top_p=0.95,
    top_k=50,
)
assistant_response = outputs[0]["generated_text"][-1]["content"].strip()
print(assistant_response)

Example output:

صنعوني جماعة من المهندسين والمطورين التوانسة. يحبوا يطوّروا الذكاء الاصطناعي في تونس و يسهلوا استخدامه باللهجة متاعنا.

("I was made by a group of Tunisian engineers and developers. They want to advance artificial intelligence in Tunisia and make it easy to use in our dialect.")
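For multi-turn chat, the full conversation history is passed to the pipeline on every call, with user and assistant turns alternating. A minimal sketch of maintaining that history (the `append_turn` helper, the placeholder reply, and the second user message are illustrations, not part of the model's API):

```python
def append_turn(history, role, content):
    """Append one chat turn in the list-of-dicts format the pipeline consumes."""
    history.append({"role": role, "content": content})
    return history

history = append_turn([], "user", "شكون صنعك")  # "Who made you?"

# After generating, store the assistant reply before adding the next user turn, e.g.:
# reply = pipe(history, max_new_tokens=2048)[0]["generated_text"][-1]["content"]
reply = "صنعوني جماعة من المهندسين والمطورين التوانسة."  # placeholder for a generated reply
append_turn(history, "assistant", reply)
append_turn(history, "user", "و شنوة تنجم تعمل؟")  # "And what can you do?"

# Most chat templates require strictly alternating user/assistant roles.
print([m["role"] for m in history])  # → ['user', 'assistant', 'user']
```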

Quantized Versions:

  • GGUF quantized versions will be released later.

Training Dataset:

  • Tun-SFT dataset (to be released later):
    • A mix of organically collected and synthetically generated data.

Limitations and Ethical Considerations:

  • The model may occasionally produce incorrect, biased, or culturally inappropriate responses.
  • It may not perform optimally on formal Tunisian Arabic texts.

Future Plans:

  • Release of GGUF quantized versions.
  • Open-source availability of the Tun-SFT dataset.

Author: Saif Eddine Amdouni
