Bilingual Assistant Model Card
Overview
This bilingual language model (Daemontatox/MawaredT1) is designed to support seamless text generation and understanding in both Arabic (ar) and English (en). Fine-tuned from the arcee-ai/Meraj-Mini base model, it offers robust bilingual capabilities for applications such as conversational agents, content creation, and multilingual text analysis.
Key Highlights
- Multilingual Proficiency: Handles complex linguistic nuances in both Arabic and English, producing high-quality output in either language.
- Performance Optimization: Trained roughly 2x faster using the Unsloth framework together with the Hugging Face TRL library.
- Transformer-Based Architecture: Utilizes advanced transformer layers to deliver state-of-the-art performance in text generation and inference.
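To make the usage concrete, here is a minimal sketch of loading the model for bilingual generation with the Hugging Face transformers library. The model ID matches this repository; the prompts and generation settings are illustrative assumptions, not recommended values.

```python
# Minimal bilingual generation sketch with transformers (settings are illustrative).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Daemontatox/MawaredT1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# The same model handles English and Arabic prompts.
prompts = [
    "Summarize the benefits of bilingual assistants.",
    "لخص فوائد المساعدات ثنائية اللغة.",  # Arabic: "Summarize the benefits of bilingual assistants."
]
for prompt in prompts:
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    outputs = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.7)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```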
Development Details
- Developer: Daemontatox
- License: Apache-2.0, allowing open use, modification, and redistribution for a wide range of use cases.
- Base Model: A fine-tuned variant of arcee-ai/Meraj-Mini.
- Frameworks Used:
- Unsloth: Enabled faster and more efficient training.
- Hugging Face TRL Library: Provided tools for reinforcement learning fine-tuning, enhancing model responsiveness and accuracy.
Training Process
The fine-tuning process was conducted with a focus on:
- Data Diversity: Leveraged a bilingual corpus to ensure comprehensive language understanding across both supported languages.
- Optimized Hardware Utilization: Implemented Unsloth's accelerated training methods, significantly reducing resource consumption and training time.
- Reinforcement Learning: Used Hugging Face's TRL library to fine-tune the model's decision-making and response generation capabilities, particularly for conversational and contextual understanding.
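The exact recipe is not published in this card, so the sketch below shows only a generic supervised fine-tuning setup with Unsloth and TRL's SFTTrainer. The hyperparameters, LoRA configuration, and the bilingual_sft.jsonl file are assumptions, and any reinforcement-learning stage would be layered on top of this; argument names may also differ slightly across TRL versions.

```python
# Generic SFT sketch with Unsloth + TRL; not the authors' actual recipe.
from unsloth import FastLanguageModel
from datasets import load_dataset
from transformers import TrainingArguments
from trl import SFTTrainer

# Load the base model with Unsloth's accelerated loader (4-bit to save memory).
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="arcee-ai/Meraj-Mini",
    max_seq_length=2048,
    load_in_4bit=True,
)

# Attach LoRA adapters so only a small fraction of the weights is trained.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
)

# Hypothetical bilingual corpus with one "text" field per example.
dataset = load_dataset("json", data_files="bilingual_sft.jsonl", split="train")

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",
    max_seq_length=2048,
    args=TrainingArguments(
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        learning_rate=2e-4,
        num_train_epochs=1,
        output_dir="outputs",
    ),
)
trainer.train()
```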
Applications
This model is suited for a variety of real-world applications, including:
- Conversational Agents: Powering bilingual chatbots and virtual assistants for customer support and personal use.
- Content Generation: Assisting in drafting multilingual articles, social media posts, and creative writing.
- Translation Support: Providing context-aware translations and summaries across Arabic and English.
- Education: Enhancing learning platforms by offering bilingual educational content and interactive learning experiences.
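For the conversational-agent use case, a single bilingual chat turn could look like the sketch below. It assumes the tokenizer ships a chat template; the user message and generation length are purely illustrative.

```python
# Hypothetical bilingual chat turn; assumes the tokenizer provides a chat template.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Daemontatox/MawaredT1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [
    {"role": "user",
     "content": "Reply in Arabic: what are your store's opening hours?"},
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(input_ids, max_new_tokens=200)

# Decode only the newly generated assistant tokens.
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```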
Future Directions
Plans for extending the model's capabilities include:
- Additional Language Support: Exploring fine-tuning for additional languages.
- Domain-Specific Training: Specializing the model for industries such as healthcare, legal, and technical writing.
- Optimization for Edge Devices: Investigating quantization techniques to deploy the model on resource-constrained hardware like mobile devices and IoT platforms.
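As one assumed route toward lighter-weight deployment, the sketch below loads the model in 4-bit precision with bitsandbytes via transformers; this illustrates post-training quantization in general and is not a deployment method specified by this card.

```python
# Illustrative 4-bit quantized load with bitsandbytes (assumed approach, not prescribed here).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "Daemontatox/MawaredT1"
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",
)
```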
Open LLM Leaderboard Evaluation Results
Detailed and summarized results are available on the Open LLM Leaderboard.
| Metric | Value (%) |
|---|---|
| Average | 26.63 |
| IFEval (0-Shot) | 41.99 |
| BBH (3-Shot) | 31.90 |
| MATH Lvl 5 (4-Shot) | 14.58 |
| GPQA (0-Shot) | 11.30 |
| MuSR (0-Shot) | 18.68 |
| MMLU-PRO (5-Shot) | 41.31 |