---
base_model: arcee-ai/Meraj-Mini
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
license: apache-2.0
language:
- ar
- en
model-index:
- name: MawaredT1
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: IFEval (0-Shot)
      type: wis-k/instruction-following-eval
      split: train
      args:
        num_few_shot: 0
    metrics:
    - type: inst_level_strict_acc and prompt_level_strict_acc
      value: 41.99
      name: averaged accuracy
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=Daemontatox%2FMawaredT1
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: BBH (3-Shot)
      type: SaylorTwift/bbh
      split: test
      args:
        num_few_shot: 3
    metrics:
    - type: acc_norm
      value: 31.9
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=Daemontatox%2FMawaredT1
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MATH Lvl 5 (4-Shot)
      type: lighteval/MATH-Hard
      split: test
      args:
        num_few_shot: 4
    metrics:
    - type: exact_match
      value: 14.58
      name: exact match
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=Daemontatox%2FMawaredT1
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GPQA (0-shot)
      type: Idavidrein/gpqa
      split: train
      args:
        num_few_shot: 0
    metrics:
    - type: acc_norm
      value: 11.3
      name: acc_norm
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=Daemontatox%2FMawaredT1
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MuSR (0-shot)
      type: TAUR-Lab/MuSR
      args:
        num_few_shot: 0
    metrics:
    - type: acc_norm
      value: 18.68
      name: acc_norm
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=Daemontatox%2FMawaredT1
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU-PRO (5-shot)
      type: TIGER-Lab/MMLU-Pro
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 41.31
      name: accuracy
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=Daemontatox%2FMawaredT1
      name: Open LLM Leaderboard
---
![image](./image.webp)
# Bilingual Assistant Model Card

## Overview

This bilingual language model is designed to support seamless text generation and understanding in both Arabic (ar) and English (en). Fine-tuned from the `arcee-ai/Meraj-Mini` base model, it offers robust multilingual capabilities optimized for various applications such as conversational agents, content creation, and multilingual text analysis.

### Key Highlights

- **Multilingual Proficiency:** Designed to handle complex linguistic nuances in both Arabic and English, ensuring high-quality outputs in both languages.
- **Performance Optimization:** Achieved 2x faster training using the [Unsloth](https://github.com/unslothai/unsloth) framework together with the Hugging Face TRL library.
- **Transformer-Based Architecture:** Built on the Qwen2 transformer architecture for high-quality text generation and inference (see the loading sketch below).
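
The following is a minimal inference sketch using the Hugging Face `transformers` library. The repository id `Daemontatox/MawaredT1` is inferred from the leaderboard links at the end of this card, and the prompt and generation settings are illustrative, not prescribed:

```python
# Minimal inference sketch; adjust dtype/device settings to your hardware.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Daemontatox/MawaredT1"  # inferred from the leaderboard links below
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

prompt = "Explain, in English and then in Arabic, what a bilingual language model is."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```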

## Development Details

- **Developer:** Daemontatox
- **License:** Licensed under Apache-2.0, ensuring open accessibility and flexibility for a wide range of use cases.
- **Base Model:** The model is a fine-tuned variant of `arcee-ai/Meraj-Mini`.
- **Frameworks Used:**
  - [Unsloth](https://github.com/unslothai/unsloth): Enabled faster and more efficient training.
  - Hugging Face TRL Library: Provided tools for reinforcement learning fine-tuning, enhancing model responsiveness and accuracy.

## Training Process

The fine-tuning process was conducted with a focus on:

- **Data Diversity:** Leveraged a bilingual corpus to ensure comprehensive language understanding across both supported languages.
- **Optimized Hardware Utilization:** Implemented Unsloth's accelerated training methods, significantly reducing resource consumption and training time.
- **Reinforcement Learning:** Used Hugging Face's TRL library to fine-tune the model's decision-making and response generation, particularly for conversational and contextual understanding (a training sketch follows this list).
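
The sketch below illustrates how this kind of Unsloth + TRL fine-tuning run is typically wired up. It is not the exact training script: the dataset file, LoRA rank, and hyperparameters are placeholders, and `SFTTrainer`'s exact arguments vary across TRL versions.

```python
# Illustrative fine-tuning sketch, not the exact training configuration.
from unsloth import FastLanguageModel
from trl import SFTTrainer
from transformers import TrainingArguments
from datasets import load_dataset

# Load the base model through Unsloth's accelerated loader.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="arcee-ai/Meraj-Mini",
    max_seq_length=2048,
    load_in_4bit=True,  # QLoRA-style memory savings
)

# Attach LoRA adapters; rank and target modules are placeholders.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    lora_alpha=16,
)

# Hypothetical bilingual corpus with a "text" column.
dataset = load_dataset("json", data_files="bilingual_corpus.jsonl", split="train")

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",
    max_seq_length=2048,
    args=TrainingArguments(
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        learning_rate=2e-4,
        num_train_epochs=1,
        output_dir="outputs",
    ),
)
trainer.train()
```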

## Applications

This model is suited for a variety of real-world applications, including:

1. **Conversational Agents:** Powering bilingual chatbots and virtual assistants for customer support and personal use.
2. **Content Generation:** Assisting in drafting multilingual articles, social media posts, and creative writing.
3. **Translation Support:** Providing context-aware translations and summaries across Arabic and English (see the sketch after this list).
4. **Education:** Enhancing learning platforms by offering bilingual educational content and interactive learning experiences.
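
Since the base model is Qwen2-derived, the tokenizer most likely ships a chat template; assuming the template is retained after fine-tuning, a translation-style exchange could look like the sketch below:

```python
# Bilingual chat sketch; assumes the Qwen2 chat template of the base model is retained.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Daemontatox/MawaredT1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Arabic prompt: "Translate into English: Artificial intelligence is changing the world."
messages = [{"role": "user",
             "content": "ترجم إلى الإنجليزية: الذكاء الاصطناعي يغير العالم."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(input_ids, max_new_tokens=100)
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```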

## Future Directions

Plans for extending the model's capabilities include:

- **Additional Language Support:** Exploring fine-tuning for additional languages.
- **Domain-Specific Training:** Specializing the model for industries such as healthcare, legal, and technical writing.
- **Optimization for Edge Devices:** Investigating quantization techniques to deploy the model on resource-constrained hardware such as mobile devices and IoT platforms; a loading sketch follows.
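
As one example of what such quantization could look like today, the model can be loaded in 4-bit precision with `bitsandbytes` via the standard `transformers` API; the NF4 settings below are common illustrative defaults, not a tested deployment recipe:

```python
# 4-bit loading sketch with bitsandbytes; settings are illustrative, not benchmarked.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    "Daemontatox/MawaredT1",
    quantization_config=bnb_config,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("Daemontatox/MawaredT1")
```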


# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/Daemontatox__MawaredT1-details)!
Summarized results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/contents/viewer/default/train?q=Daemontatox%2FMawaredT1&sort[column]=Average%20%E2%AC%86%EF%B8%8F&sort[direction]=desc)!

|      Metric       |Value (%)|
|-------------------|--------:|
|**Average**        |    26.63|
|IFEval (0-Shot)    |    41.99|
|BBH (3-Shot)       |    31.90|
|MATH Lvl 5 (4-Shot)|    14.58|
|GPQA (0-shot)      |    11.30|
|MuSR (0-shot)      |    18.68|
|MMLU-PRO (5-shot)  |    41.31|