danielhanchen committed
Commit: c389abe · 1 parent: 21714c3

Upload 7 files

Browse files:
- Alpaca_+_Codellama_34b_full_example.ipynb  +17 -6
- Alpaca_+_Llama_7b_full_example.ipynb  +3 -3
- Alpaca_+_Mistral_7b_full_example.ipynb  +0 -0
- Alpaca_+_TinyLlama_+_RoPE_Scaling_full_example.ipynb  +3 -3
- ChatML_+_chat_templates_+_Mistral_7b_full_example.ipynb  +0 -0
- DPO_Zephyr_Unsloth_Example.ipynb  +1 -1
- Mistral_7b_Text_Completion_Raw_Text_training_full_example.ipynb  +0 -0
Alpaca_+_Codellama_34b_full_example.ipynb
CHANGED
@@ -3,7 +3,9 @@
  {
  "cell_type": "markdown",
  "source": [
- "To run this, press \"Runtime\" and press \"Run all\" on a
+ "To run this, press \"Runtime\" and press \"Run all\" on an A100 Colab instance!\n",
+ "\n",
+ "**[NOTE]** You might be lucky if an A100 is free! If not, try our Mistral 7b notebook on a free Tesla T4 [here](https://colab.research.google.com/drive/1Dyauq4kTZoLewQ1cApceUQVNcnnNTzg_?usp=sharing).\n",
  "<div class=\"align-center\">\n",
  " <a href=\"https://github.com/unslothai/unsloth\"><img src=\"https://github.com/unslothai/unsloth/raw/main/images/unsloth%20new%20logo.png\" width=\"110\"></a>\n",
  " <a href=\"https://discord.gg/u54VK8m8tk\"><img src=\"https://github.com/unslothai/unsloth/raw/main/images/Discord.png\" width=\"150\"></a>\n",
@@ -31,13 +33,11 @@
  "major_version, minor_version = torch.cuda.get_device_capability()\n",
  "if major_version >= 8:\n",
  " # Use this for new GPUs like Ampere, Hopper GPUs (RTX 30xx, RTX 40xx, A100, H100, L40)\n",
- " !pip install \"unsloth[
+ " !pip install \"unsloth[colab-ampere] @ git+https://github.com/unslothai/unsloth.git\"\n",
  "else:\n",
  " # Use this for older GPUs (V100, Tesla T4, RTX 20xx)\n",
  " !pip install \"unsloth[colab] @ git+https://github.com/unslothai/unsloth.git\"\n",
- "pass
- "\n",
- "!pip install \"git+https://github.com/huggingface/transformers.git\" # Native 4bit loading works!"
+ "pass"
  ]
  },
  {
@@ -728,6 +728,7 @@
  "\n",
  "trainer = SFTTrainer(\n",
  " model = model,\n",
+ " tokenizer = tokenizer,\n",
  " train_dataset = dataset,\n",
  " dataset_text_field = \"text\",\n",
  " max_seq_length = max_seq_length,\n",
@@ -1410,7 +1411,9 @@
  "source": [
  "<a name=\"Save\"></a>\n",
  "### Saving, loading finetuned models\n",
- "To save the final model, either use Huggingface's `push_to_hub` for an online save or `save_pretrained` for a local save
+ "To save the final model, either use Huggingface's `push_to_hub` for an online save or `save_pretrained` for a local save.\n",
+ "\n",
+ "To save to `GGUF` / `llama.cpp`, or for model merging, use `model.merge_and_unload` first, then save the model. Maxime Labonne's [llm-course](https://mlabonne.github.io/blog/posts/Quantize_Llama_2_models_using_ggml.html) has a nice tutorial on converting HF to GGUF! This [issue](https://github.com/ggerganov/llama.cpp/issues/3097) might be helpful for more info."
  ],
  "metadata": {
  "id": "uMuVrWbjAzhc"
@@ -1498,6 +1501,14 @@
  "cell_type": "markdown",
  "source": [
  "And we're done! If you have any questions on Unsloth, we have a [Discord](https://discord.gg/u54VK8m8tk) channel! If you find any bugs, want to keep up with the latest LLM releases, need help, or want to join projects, feel free to join our Discord!\n",
+ "\n",
+ "We also have other notebooks on:\n",
+ "1. Zephyr DPO [free Colab](https://colab.research.google.com/drive/15vttTpzzVXv_tJwEk-hIcQ0S9FcEWvwP?usp=sharing)\n",
+ "2. Llama 7b [free Colab](https://colab.research.google.com/drive/1lBzz5KeZJKXjvivbYvmGarix9Ao6Wxe5?usp=sharing)\n",
+ "3. TinyLlama full Alpaca 52K in under 80 hours [free Colab](https://colab.research.google.com/drive/1AZghoNBQaMDgWJpi4RbffGM1h6raLUj9?usp=sharing)\n",
+ "4. Mistral 7b [free Colab](https://colab.research.google.com/drive/1Dyauq4kTZoLewQ1cApceUQVNcnnNTzg_?usp=sharing)\n",
+ "5. Llama 7b [free Kaggle](https://www.kaggle.com/danielhanchen/unsloth-alpaca-t4-ddp)\n",
+ "\n",
  "<div class=\"align-center\">\n",
  " <a href=\"https://github.com/unslothai/unsloth\"><img src=\"https://github.com/unslothai/unsloth/raw/main/images/unsloth%20new%20logo.png\" width=\"110\"></a>\n",
  " <a href=\"https://discord.gg/u54VK8m8tk\"><img src=\"https://github.com/unslothai/unsloth/raw/main/images/Discord.png\" width=\"150\"></a>\n",
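For readers skimming the diff, the saving flow the new markdown cell describes can be sketched as plain Python. This is a minimal sketch, not part of the commit: it assumes a trained LoRA/PEFT `model` and its `tokenizer` are in scope, and the directory name and Hub repo id are placeholders.

# Minimal sketch of the saving flow described above (assumes `model` is a PEFT model).
merged = model.merge_and_unload()         # fold the LoRA adapters into the base weights
merged.save_pretrained("merged_model")    # local save: a plain HF checkpoint, ready for GGUF conversion
tokenizer.save_pretrained("merged_model")
# merged.push_to_hub("your-username/your-model")  # or an online save (placeholder repo id)

Merging first matters because the llama.cpp conversion tooling linked above expects a plain HF checkpoint, not a base model plus separate adapter weights.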
Alpaca_+_Llama_7b_full_example.ipynb
CHANGED
@@ -31,7 +31,7 @@
|
|
31 |
"major_version, minor_version = torch.cuda.get_device_capability()\n",
|
32 |
"if major_version >= 8:\n",
|
33 |
" # Use this for new GPUs like Ampere, Hopper GPUs (RTX 30xx, RTX 40xx, A100, H100, L40)\n",
|
34 |
-
" !pip install \"unsloth[
|
35 |
"else:\n",
|
36 |
" # Use this for older GPUs (V100, Tesla T4, RTX 20xx)\n",
|
37 |
" !pip install \"unsloth[colab] @ git+https://github.com/unslothai/unsloth.git\"\n",
|
@@ -1243,9 +1243,9 @@
|
|
1243 |
"source": [
|
1244 |
"if False:\n",
|
1245 |
" # I highly do NOT suggest - use Unsloth if possible\n",
|
1246 |
-
" from peft import
|
1247 |
" from transformers import AutoTokenizer\n",
|
1248 |
-
" model =
|
1249 |
" \"lora_model\", # YOUR MODEL YOU USED FOR TRAINING\n",
|
1250 |
" load_in_4bit = load_in_4bit,\n",
|
1251 |
" )\n",
|
|
|
31 |
"major_version, minor_version = torch.cuda.get_device_capability()\n",
|
32 |
"if major_version >= 8:\n",
|
33 |
" # Use this for new GPUs like Ampere, Hopper GPUs (RTX 30xx, RTX 40xx, A100, H100, L40)\n",
|
34 |
+
" !pip install \"unsloth[colab-ampere] @ git+https://github.com/unslothai/unsloth.git\"\n",
|
35 |
"else:\n",
|
36 |
" # Use this for older GPUs (V100, Tesla T4, RTX 20xx)\n",
|
37 |
" !pip install \"unsloth[colab] @ git+https://github.com/unslothai/unsloth.git\"\n",
|
|
|
1243 |
"source": [
|
1244 |
"if False:\n",
|
1245 |
" # I highly do NOT suggest - use Unsloth if possible\n",
|
1246 |
+
" from peft import AutoPeftModelForCausalLM\n",
|
1247 |
" from transformers import AutoTokenizer\n",
|
1248 |
+
" model = AutoPeftModelForCausalLM.from_pretrained(\n",
|
1249 |
" \"lora_model\", # YOUR MODEL YOU USED FOR TRAINING\n",
|
1250 |
" load_in_4bit = load_in_4bit,\n",
|
1251 |
" )\n",
|
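Reassembled from the JSON-escaped cell above, the (discouraged) non-Unsloth loading path reads roughly as below. The `load_in_4bit = True` value and the final tokenizer line are assumptions: the notebook defines `load_in_4bit` earlier, and the `AutoTokenizer` import implies a matching load.

# Sketch of the non-Unsloth loading path shown in the hunk above.
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

load_in_4bit = True  # assumed; defined earlier in the notebook
model = AutoPeftModelForCausalLM.from_pretrained(
    "lora_model",  # YOUR MODEL YOU USED FOR TRAINING
    load_in_4bit = load_in_4bit,
)
tokenizer = AutoTokenizer.from_pretrained("lora_model")  # assumed follow-up to the import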
Alpaca_+_Mistral_7b_full_example.ipynb
CHANGED
The diff for this file is too large to render.
See raw diff
Alpaca_+_TinyLlama_+_RoPE_Scaling_full_example.ipynb
CHANGED
@@ -33,7 +33,7 @@
  "major_version, minor_version = torch.cuda.get_device_capability()\n",
  "if major_version >= 8:\n",
  " # Use this for new GPUs like Ampere, Hopper GPUs (RTX 30xx, RTX 40xx, A100, H100, L40)\n",
- " !pip install \"unsloth[
+ " !pip install \"unsloth[colab-ampere] @ git+https://github.com/unslothai/unsloth.git\"\n",
  "else:\n",
  " # Use this for older GPUs (V100, Tesla T4, RTX 20xx)\n",
  " !pip install \"unsloth[colab] @ git+https://github.com/unslothai/unsloth.git\"\n",
@@ -2420,9 +2420,9 @@
  "source": [
  "if False:\n",
  " # I highly do NOT suggest - use Unsloth if possible\n",
- " from peft import
+ " from peft import AutoPeftModelForCausalLM\n",
  " from transformers import AutoTokenizer\n",
- " model =
+ " model = AutoPeftModelForCausalLM.from_pretrained(\n",
  " \"lora_model\", # YOUR MODEL YOU USED FOR TRAINING\n",
  " load_in_4bit = load_in_4bit,\n",
  " )\n",
ChatML_+_chat_templates_+_Mistral_7b_full_example.ipynb
ADDED
The diff for this file is too large to render.
See raw diff
DPO_Zephyr_Unsloth_Example.ipynb
CHANGED
@@ -32,7 +32,7 @@
  "major_version, minor_version = torch.cuda.get_device_capability()\n",
  "if major_version >= 8:\n",
  " # Use this for new GPUs like Ampere, Hopper GPUs (RTX 30xx, RTX 40xx, A100, H100, L40)\n",
- " !pip install \"unsloth[
+ " !pip install \"unsloth[colab-ampere] @ git+https://github.com/unslothai/unsloth.git\"\n",
  "else:\n",
  " # Use this for older GPUs (V100, Tesla T4, RTX 20xx)\n",
  " !pip install \"unsloth[colab] @ git+https://github.com/unslothai/unsloth.git\"\n",
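This install hunk recurs identically across the notebooks above, so here is the whole cell as it reads after the commit, de-escaped from the notebook JSON. The `import torch` line is an assumption (it sits earlier in the cell), and the `!pip` shell escape requires a Jupyter/Colab environment.

# Install cell after this commit: pick the unsloth extras by GPU compute capability.
import torch  # assumed earlier in the cell
major_version, minor_version = torch.cuda.get_device_capability()
if major_version >= 8:
    # Use this for new GPUs like Ampere, Hopper GPUs (RTX 30xx, RTX 40xx, A100, H100, L40)
    !pip install "unsloth[colab-ampere] @ git+https://github.com/unslothai/unsloth.git"
else:
    # Use this for older GPUs (V100, Tesla T4, RTX 20xx)
    !pip install "unsloth[colab] @ git+https://github.com/unslothai/unsloth.git"
pass

Compute capability 8.0 is the Ampere cutoff, which is why the branch tests `major_version >= 8`.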
Mistral_7b_Text_Completion_Raw_Text_training_full_example.ipynb
ADDED
The diff for this file is too large to render.
See raw diff