---
base_model: unsloth/phi-4-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
license: apache-2.0
language:
- en
datasets:
- bespokelabs/Bespoke-Stratos-17k
---

# Uploaded model

- **Developed by:** Quazim0t0
- **License:** apache-2.0
- **Finetuned from model:** unsloth/phi-4-unsloth-bnb-4bit
- **Trained for 8 hours on an NVIDIA A800 with the Bespoke-Stratos-17k dataset.**
- Open WebUI function: https://openwebui.com/f/quaz93/phi4_turn_r1_distill_thought_function_v1

# Phi4 Turn R1Distill LoRA Adapters

## Overview
These **LoRA adapters** were trained using diverse **reasoning datasets** that incorporate structured **Thought** and **Solution** responses to enhance logical inference. This project was designed to **test the R1 dataset** on **Phi-4**, aiming to create a **lightweight, fast, and efficient reasoning model**.

All adapters were fine-tuned using an **NVIDIA A800 GPU**, ensuring high performance and compatibility for continued training, merging, or direct deployment.
As part of an open-source initiative, all resources are made **publicly available** for unrestricted research and development.

---

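The structured Thought/Solution responses mentioned above can be consumed programmatically. Below is a minimal sketch that splits a model reply into its reasoning trace and final answer; the `<Thought>`/`<Solution>` tag names are an assumption for illustration, not confirmed by this card, so adjust them to the actual template the adapters were trained with:

```python
import re

def split_reasoning(text: str) -> tuple[str, str]:
    """Split a model response into (thought, solution) parts.

    Assumes the hypothetical <Thought>...</Thought> and
    <Solution>...</Solution> tag format.
    """
    thought = re.search(r"<Thought>(.*?)</Thought>", text, re.DOTALL)
    solution = re.search(r"<Solution>(.*?)</Solution>", text, re.DOTALL)
    return (
        thought.group(1).strip() if thought else "",
        # Fall back to the whole reply if no solution tag is present.
        solution.group(1).strip() if solution else text.strip(),
    )

reply = "<Thought>12 * 7 = 84</Thought><Solution>84</Solution>"
print(split_reasoning(reply))  # → ('12 * 7 = 84', '84')
```

A helper like this is what the linked Open WebUI function effectively provides for its interface.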
## LoRA Adapters
Below are the currently available LoRA fine-tuned adapters (**as of January 30, 2025**):

- [Phi4.Turn.R1Distill-Lora1](https://huggingface.co/Quazim0t0/Phi4.Turn.R1Distill-Lora1)
- [Phi4.Turn.R1Distill-Lora2](https://huggingface.co/Quazim0t0/Phi4.Turn.R1Distill-Lora2)
- [Phi4.Turn.R1Distill-Lora3](https://huggingface.co/Quazim0t0/Phi4.Turn.R1Distill-Lora3)
- [Phi4.Turn.R1Distill-Lora4](https://huggingface.co/Quazim0t0/Phi4.Turn.R1Distill-Lora4)
- [Phi4.Turn.R1Distill-Lora5](https://huggingface.co/Quazim0t0/Phi4.Turn.R1Distill-Lora5)
- [Phi4.Turn.R1Distill-Lora6](https://huggingface.co/Quazim0t0/Phi4.Turn.R1Distill-Lora6)
- [Phi4.Turn.R1Distill-Lora7](https://huggingface.co/Quazim0t0/Phi4.Turn.R1Distill-Lora7)
- [Phi4.Turn.R1Distill-Lora8](https://huggingface.co/Quazim0t0/Phi4.Turn.R1Distill-Lora8)

---

## GGUF Full & Quantized Models
To facilitate broader testing and real-world inference, **GGUF Full and Quantized versions** have been provided for evaluation on **Open WebUI** and other LLM interfaces.

### **Version 1**
- [Phi4.Turn.R1Distill.Q8_0](https://huggingface.co/Quazim0t0/Phi4.Turn.R1Distill.Q8_0)
- [Phi4.Turn.R1Distill.Q4_k](https://huggingface.co/Quazim0t0/Phi4.Turn.R1Distill.Q4_k)
- [Phi4.Turn.R1Distill.16bit](https://huggingface.co/Quazim0t0/Phi4.Turn.R1Distill.16bit)

### **Version 1.1**
- [Phi4.Turn.R1Distill_v1.1_Q4_k](https://huggingface.co/Quazim0t0/Phi4.Turn.R1Distill_v1.1_Q4_k)

### **Version 1.2**
- [Phi4.Turn.R1Distill_v1.2_Q4_k](https://huggingface.co/Quazim0t0/Phi4.Turn.R1Distill_v1.2_Q4_k)

### **Version 1.3**
- [Phi4.Turn.R1Distill_v1.3_Q4_k-GGUF](https://huggingface.co/Quazim0t0/Phi4.Turn.R1Distill_v1.3_Q4_k-GGUF)

### **Version 1.4**
- [Phi4.Turn.R1Distill_v1.4_Q4_k-GGUF](https://huggingface.co/Quazim0t0/Phi4.Turn.R1Distill_v1.4_Q4_k-GGUF)

### **Version 1.5**
- [Phi4.Turn.R1Distill_v1.5_Q4_k-GGUF](https://huggingface.co/Quazim0t0/Phi4.Turn.R1Distill_v1.5_Q4_k-GGUF)

---

## Usage

### **Loading LoRA Adapters with `transformers` and `peft`**
To load and apply a LoRA adapter on top of Phi-4, use the following approach:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_model = "microsoft/phi-4"
lora_adapter = "Quazim0t0/Phi4.Turn.R1Distill-Lora1"

# Load the base model and tokenizer, then attach the LoRA adapter.
tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForCausalLM.from_pretrained(base_model)
model = PeftModel.from_pretrained(model, lora_adapter)

model.eval()

# Quick smoke test to confirm the adapter is applied.
inputs = tokenizer("Solve: 12 * 7 = ?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
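
For deployment without a runtime `peft` dependency, an adapter can be folded into the base weights. A minimal sketch using `peft`'s `merge_and_unload` (the output directory name is illustrative, not an official artifact):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_model = "microsoft/phi-4"
lora_adapter = "Quazim0t0/Phi4.Turn.R1Distill-Lora1"

tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForCausalLM.from_pretrained(base_model)
model = PeftModel.from_pretrained(model, lora_adapter)

# Fold the LoRA deltas into the base weights and drop the PEFT wrapper.
merged = model.merge_and_unload()

# Save a standalone checkpoint (hypothetical output directory).
merged.save_pretrained("phi4-r1distill-merged")
tokenizer.save_pretrained("phi4-r1distill-merged")
```

A merged checkpoint loads with plain `AutoModelForCausalLM.from_pretrained`, which is also the usual starting point for converting to GGUF.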