---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
- sft
base_model: unsloth/Qwen2-0.5B-Instruct-bnb-4bit
datasets:
- microsoft/orca-math-word-problems-200k
---

**Coding model coming soon!**

# Uploaded model

- **Developed by:** NotAiLOL
- **License:** apache-2.0
- **Finetuned from model:** unsloth/Qwen2-0.5B-Instruct-bnb-4bit

This Qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)

# Details

This model was trained on [microsoft/orca-math-word-problems-200k](https://huggingface.co/datasets/microsoft/orca-math-word-problems-200k) for 3 epochs with **rsLoRA** + **QLoRA**.

**Training Loss Graph**
![image/png](https://cdn-uploads.huggingface.co/production/uploads/6534f64c6e86d670ffb3b1bc/F6Jvbttj275iIhmRFdLeR.png)
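
For reference, the sketch below shows how a setup like this is commonly wired together with Unsloth and TRL. It is illustrative only: every hyperparameter (rank, alpha, sequence length, batch size, learning rate) and the `question`/`answer` dataset field names are assumptions, not the published recipe, and TRL's `SFTTrainer` signature varies across versions.

```python
# Illustrative sketch of an rsLoRA + QLoRA fine-tune with Unsloth.
# All hyperparameters below are assumptions, not the actual training recipe.
from unsloth import FastLanguageModel
from datasets import load_dataset
from transformers import TrainingArguments
from trl import SFTTrainer

# Load the 4-bit quantized base model (the QLoRA part).
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/Qwen2-0.5B-Instruct-bnb-4bit",
    max_seq_length=2048,
    load_in_4bit=True,
)

# Attach rank-stabilized LoRA adapters (the rsLoRA part:
# adapter scaling uses lora_alpha / sqrt(r) instead of lora_alpha / r).
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    use_rslora=True,
)

# Render each math QA pair into the ChatML template documented below;
# the "question"/"answer" field names are assumed from the dataset card.
def to_chatml(example):
    return {"text": (
        "<|im_start|>system\nYou are a professional mathematician.<|im_end|>\n"
        f"<|im_start|>user\n{example['question']}<|im_end|>\n"
        f"<|im_start|>assistant\n{example['answer']}<|im_end|>\n"
    )}

dataset = load_dataset("microsoft/orca-math-word-problems-200k", split="train")
dataset = dataset.map(to_chatml)

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",
    max_seq_length=2048,
    args=TrainingArguments(
        per_device_train_batch_size=8,
        num_train_epochs=3,  # the card reports 3 epochs
        learning_rate=2e-4,
        output_dir="outputs",
    ),
)
trainer.train()
```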

The model follows the ChatML format:
```
<|im_start|>system
You are a professional mathematician.<|im_end|>

<|im_start|>user
{}<|im_end|>

<|im_start|>assistant
{}
```
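
For inference, the tokenizer's chat template produces this format automatically. A minimal sketch with Transformers follows; the repository id is a placeholder, and the generation settings are assumptions:

```python
# Minimal inference sketch. "NotAiLOL/Qwen2-0.5B-math" is a placeholder id,
# not necessarily the real repository name.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "NotAiLOL/Qwen2-0.5B-math"  # placeholder: substitute the actual id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16)

messages = [
    {"role": "system", "content": "You are a professional mathematician."},
    {"role": "user", "content": "If 3 pencils cost $1.50, how much do 7 pencils cost?"},
]

# apply_chat_template emits the <|im_start|>/<|im_end|> tags shown above.
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)
output_ids = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```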