Update README.md
This repository demonstrates how to fine-tune the **Qwen 7B** model to create "Andy," an AI assistant for Minecraft. Using the **Unsloth framework**, this tutorial showcases efficient fine-tuning with 4-bit quantization and LoRA for scalable training on limited hardware.
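For orientation, here is a minimal sketch of what that setup looks like with Unsloth; the checkpoint name and sequence length are illustrative assumptions, and the full walkthrough appears in the steps below.

```python
# Minimal sketch: load a 4-bit quantized Qwen model with Unsloth.
# The checkpoint name and max_seq_length are illustrative assumptions.
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/Qwen2.5-7B-bnb-4bit",  # assumed 4-bit checkpoint
    max_seq_length=2048,                       # assumed context length
    load_in_4bit=True,                         # keeps memory within T4 limits
)
```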
## 🚀 Resources
- **Source Code**: [GitHub Repository](https://github.com/while-basic/mindcraft)
- **Colab Notebook**: [Open in Google Colab](https://colab.research.google.com/drive/1Eq5dOjc6sePEt7ltt8zV_oBRqstednUT?usp=sharing)
- **Blog Article**: [Walkthrough](https://chris-celaya-blog.vercel.app/articles/unsloth-training)
---
### Key Features
- **Memory-Efficient Training**: Fine-tune large models on GPUs as modest as a T4 (Google Colab free tier).
- **LoRA Integration**: Adapt only key model layers for efficient domain-specific fine-tuning (a configuration sketch follows this list).
- **Minecraft-Optimized Dataset**: Format data using **ChatML templates** for seamless integration.
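To make the LoRA point concrete, here is a minimal configuration sketch using Unsloth's PEFT wrapper, continuing from a loaded `model`; the rank, alpha, and target-module list are illustrative assumptions rather than this tutorial's exact settings.

```python
# Minimal LoRA sketch with Unsloth's PEFT wrapper (continues from a loaded model).
# r, lora_alpha, and target_modules are illustrative assumptions.
from unsloth import FastLanguageModel

model = FastLanguageModel.get_peft_model(
    model,
    r=16,                     # assumed LoRA rank
    lora_alpha=16,            # assumed scaling factor
    lora_dropout=0.0,
    target_modules=[          # projections commonly adapted in LoRA setups
        "q_proj", "k_proj", "v_proj", "o_proj",
        "gate_proj", "up_proj", "down_proj",
    ],
)
```

On the dataset side, ChatML-style turns can be rendered through the tokenizer's chat template, a standard `transformers` pattern; the example messages here are hypothetical:

```python
# Render one hypothetical training example as ChatML text.
messages = [
    {"role": "user", "content": "How do I craft a torch?"},
    {"role": "assistant", "content": "Put a piece of coal on top of a stick in the crafting grid."},
]
text = tokenizer.apply_chat_template(messages, tokenize=False)
```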
---
## Prerequisites
- **Python Knowledge**: Familiarity with basic programming concepts.
- **GPU Access**: A T4 (Colab free tier) is sufficient; higher-tier GPUs such as a V100 or A100 are recommended (a quick availability check follows this list).
- **Optional**: [Hugging Face Account](https://huggingface.co/) for model sharing.
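Before training, it is worth confirming that a GPU is actually attached to the runtime; the check below is plain PyTorch, not specific to this tutorial.

```python
# Quick sanity check that a CUDA GPU is available (standard PyTorch).
import torch

if torch.cuda.is_available():
    print("GPU:", torch.cuda.get_device_name(0))
else:
    print("No CUDA GPU detected; fine-tuning on CPU is impractical.")
```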
---
## Optimization Tips
- Expand the dataset to cover a broader range of Minecraft scenarios.
- Adjust the number of training steps to balance training time against accuracy.
- Tune inference parameters (e.g., temperature, top-p) for more natural responses; see the sketch below.
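As an illustration of the last tip, sampling settings can be adjusted at generation time; the values below are assumed starting points to tune from, not recommendations from the tutorial.

```python
# Illustrative generation settings; the values are assumptions to tune from.
inputs = tokenizer("How do I find diamonds?", return_tensors="pt").to(model.device)
outputs = model.generate(
    **inputs,
    max_new_tokens=256,  # cap response length
    do_sample=True,      # sample instead of greedy decoding
    temperature=0.7,     # lower values give more deterministic replies
    top_p=0.9,           # nucleus sampling cutoff
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```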
For more details on **Unsloth** or to contribute, visit [Unsloth GitHub](https://github.com/unslothai/unsloth).
Happy fine-tuning! 🎮
## Citation
```bibtex
@misc{celaya2025minecraft,
  author       = {Christopher B. Celaya},
  title        = {Efficient Fine-Tuning of Large Language Models - A Minecraft AI Assistant Tutorial},
  year         = {2025},
  publisher    = {GitHub},
  journal      = {GitHub repository},
  howpublished = {\url{https://github.com/kolbytn/mindcraft}},
  note         = {\url{https://chris-celaya-blog.vercel.app/articles/unsloth-training}}
}
```