{ "cells": [ { "cell_type": "markdown", "metadata": { "id": "8LKIA_qnVKOz" }, "source": [ "To run this, press \"Runtime\" and press \"Run all\" on a **free** Tesla T4 Google Colab instance!\n", "
\n", "\n", "To install Unsloth on your own computer, follow the installation instructions on our Github page [here](https://github.com/unslothai/unsloth#installation-instructions---conda).\n", "\n", "You will learn how to do [data prep](#Data), how to [train](#Train), how to [run the model](#Inference), & [how to save it](#Save) (eg for Llama.cpp).\n", "\n", "**[NOTE]** TinyLlama was trained on 2048 max tokens. With Unsloth, we can arbitrarily set the sequence length we want via `max_seq_length=4096`. We do RoPE Scaling internally to magically extend the maximum context size!" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "2eSvM9zX_2d3" }, "outputs": [], "source": [ "%%capture\n", "import torch\n", "major_version, minor_version = torch.cuda.get_device_capability()\n", "if major_version >= 8:\n", " # Use this for new GPUs like Ampere, Hopper GPUs (RTX 30xx, RTX 40xx, A100, H100, L40)\n", " !pip install \"unsloth[colab-ampere] @ git+https://github.com/unslothai/unsloth.git\"\n", "else:\n", " # Use this for older GPUs (V100, Tesla T4, RTX 20xx)\n", " !pip install \"unsloth[colab] @ git+https://github.com/unslothai/unsloth.git\"\n", "pass" ] }, { "cell_type": "markdown", "metadata": { "id": "r2v_X2fA0Df5" }, "source": [ "* We support Llama, Mistral, CodeLlama, TinyLlama, Vicuna, Open Hermes etc\n", "* And Yi, Qwen ([llamafied](https://huggingface.co/models?sort=trending&search=qwen+llama)), Deepseek, all Llama, Mistral derived archs.\n", "* We support 16bit LoRA or 4bit QLoRA. Both 2x faster.\n", "* `max_seq_length` can be set to anything, since we do automatic RoPE Scaling via [kaiokendev's](https://kaiokendev.github.io/til) method.\n", "* [**NEW**] With [PR 26037](https://github.com/huggingface/transformers/pull/26037), we support downloading 4bit models **4x faster**! [Our repo](https://huggingface.co/unsloth) has Llama, Mistral 4bit models." 
] }, { "cell_type": "code", "execution_count": null, "metadata": { "colab": { "base_uri": "https://localhost:8080/", "height": 474, "referenced_widgets": [ "188ee79c6ef44ae896a93c0184780f41", "155663a36c2d4949a6bba5ee2b4224b1", "5e48035e021b40aea0317f856b842379", "d9a92404c9f74425ac0cff862001b168", "56d1a29a8e724aa98b0e9b2d8c995713", "94a7e3827ab241dbb75d3d816354c694", "c21c74c7956d425c9049fb1cdc8b7d07", "49de6fa3e53e4259b87caabcb1cc50b2", "b53a34cdb16a4b8eb482b91f407b688a", "982fd1d338e148b2ba66050e03897dbb", "2be4a7d986c44fd4a222dc2f1d7b9918", "c8fd063c9d9c4270b3080677fdd92632", "cd700122445641279b905116cc3b73a1", "e2901aa31b3e4ba39ba138468af5c95b", "20fcb167b6c04792b627f7d13da5db94", "2c35f51a2e9e439f9757d27a4e8cfb5a", "4b7057f7b1aa4e9ea73e2aff19d5d7f5", "66917e3680214990a27525ee10ef0259", "6cdb22652f1f47648c8dfe632e7e87b5", "02b75597dc7c4d1cbf3c51b1fddd3ec6", "008b398b553c47e1a3c9bedac5644642", "78ef61071c244b86b20227e775267aa0", "e517fa5b72d2455eb39101c368165b38", "d532a4ba00d7488ead1366f690a28c41", "ff667ca8001e44b0a6146daf3de94e55", "49ac5705579c4035b864db8d06a3efd6", "800461f0e86545e6a99fbb868e028a27", "70df82f25eeb4f9f83778c3c4cab7ddd", "9986a56c71b4451ea852e49e34ebd2fd", "ed980dda0e504e29ae8ca06e1535073b", "680c3c2c0ea24be0b7a29ad770f28abd", "112fd48addfb42cba27b1939375d5e4c", "08ddb073da9b4a4981e5452df0e80c5a", "5579f451f9bc490fa924455dcf6351f0", "92d80dfad00e4b0abfa3054752385c21", "95ed136481974ccd84eb0488ed5c328d", "d52459dc6f224b169a49d1cbd2d92f50", "e73ae12084e848a8aa82e73c5efa9d85", "6324e9afb3b64beaa84a80622e4aa4af", "fe63f948843240c184084e159a8feef3", "78dcd43379e64fb994fc5d8ad392cd32", "6d258790472640f493b26790298aa366", "9eb629533a45449ea819ed32c2a35a41", "5adbfe3e9c424429851258c009d72f26", "5e683f7a34e74032a2543ce2db6d9ceb", "5d20dbac1cd74e378902f7f6ef71e1f5", "323cca9bfd6a4342b614043b95b8f722", "b792c64d4aed4f90938c58344961d32c", "56109c78c5454216af1f1e1910bee352", "a8a15bf2150246999bf268f0731f9e43", "6ffc5ef1a24249bfbaea21fbc8990298", "254643fe17ec4613bc26de5b3330c7f1", "9725fce622b845b192fbb319f01f56d8", "ee2af32e73d542f9b089d761eed9d801", "138e959064c04ff6a2b6d3a4eb0f6359", "083e0fb519b843e48d283a638e8a6dcb", "56bc8dce7dfb44789f0edb09df8dd450", "64d035fbc33f417d970f39c5aa14c93f", "437886691cdf40d4861575b7b91feb89", "8809b53bd5f74f228c0ab6df9a918796", "c2628e20f1154acd94bbcb282fd6554d", "8680fbf7a67940e7ac6f6d59873ef4e9", "12634ac7d05e490493d5c068087ff978", "0d950e93a67e4402949fce40ba6097b6", "103ed45882c3450a8bd9e55940d88c50", "6abff53f5ed74b8ab7bfe981547d44b4", "acedd5f384554c108b806e7ceef84382", "78f3ba544cdb4e36b76f3a0fe4d79075", "876f9871caac4fba8c6a0f86f98e2314", "4b2bf0e58403475a8b201e87cf3efb4b", "68a8b3c46c15441d9a67d1b578c64f1d", "a06afe98f7fa4d6a99a2defe9c7828cc", "063e74555fbb4ab381cdcbcabc37461e", "1cffaffc53624ffab7ce7a5830fcad07", "47255152beba4690b21cce344d30a548", "4e7766fa9a1048bcacaf4cea16f24d90", "ffa5224eb65142d3a3cd053fd43c14b7" ] }, "id": "QmUBVEnvCDJv", "outputId": "532355ad-a380-4841-d692-c69e9dd97c90" }, "outputs": [ { "output_type": "stream", "name": "stderr", "text": [ "/usr/local/lib/python3.10/dist-packages/unsloth/__init__.py:67: UserWarning: CUDA is not linked properly.\n", "We shall run `ldconfig /usr/lib64-nvidia` to try to fix it.\n", " warnings.warn(\n" ] }, { "output_type": "display_data", "data": { "text/plain": [ "config.json: 0%| | 0.00/1.09k [00:00, ?B/s]" ], "application/vnd.jupyter.widget-view+json": { "version_major": 2, "version_minor": 0, "model_id": "188ee79c6ef44ae896a93c0184780f41" } }, "metadata": {} }, { 
"output_type": "stream", "name": "stderr", "text": [ "==((====))== Unsloth: Fast Llama patching release 2024.1\n", " \\\\ /| GPU: Tesla T4. Max memory: 14.748 GB\n", "O^O/ \\_/ \\ CUDA capability = 7.5. Xformers = 0.0.22.post7. FA = False.\n", "\\ / Pytorch version: 2.1.0+cu121. CUDA Toolkit = 12.1\n", " \"-____-\" bfloat16 = FALSE. Platform = Linux\n", "\n", "Unsloth: unsloth/tinyllama-bnb-4bit can only handle sequence lengths of at most 2048.\n", "But with kaiokendev's RoPE scaling of 2.0, it can be magically be extended to 4096!\n", "You passed `quantization_config` to `from_pretrained` but the model you're loading already has a `quantization_config` attribute. The `quantization_config` attribute will be overwritten with the one you passed to `from_pretrained`.\n" ] }, { "output_type": "display_data", "data": { "text/plain": [ "model.safetensors: 0%| | 0.00/762M [00:00, ?B/s]" ], "application/vnd.jupyter.widget-view+json": { "version_major": 2, "version_minor": 0, "model_id": "c8fd063c9d9c4270b3080677fdd92632" } }, "metadata": {} }, { "output_type": "display_data", "data": { "text/plain": [ "generation_config.json: 0%| | 0.00/129 [00:00, ?B/s]" ], "application/vnd.jupyter.widget-view+json": { "version_major": 2, "version_minor": 0, "model_id": "e517fa5b72d2455eb39101c368165b38" } }, "metadata": {} }, { "output_type": "display_data", "data": { "text/plain": [ "tokenizer_config.json: 0%| | 0.00/894 [00:00, ?B/s]" ], "application/vnd.jupyter.widget-view+json": { "version_major": 2, "version_minor": 0, "model_id": "5579f451f9bc490fa924455dcf6351f0" } }, "metadata": {} }, { "output_type": "display_data", "data": { "text/plain": [ "tokenizer.model: 0%| | 0.00/500k [00:00, ?B/s]" ], "application/vnd.jupyter.widget-view+json": { "version_major": 2, "version_minor": 0, "model_id": "5e683f7a34e74032a2543ce2db6d9ceb" } }, "metadata": {} }, { "output_type": "display_data", "data": { "text/plain": [ "tokenizer.json: 0%| | 0.00/1.84M [00:00, ?B/s]" ], "application/vnd.jupyter.widget-view+json": { "version_major": 2, "version_minor": 0, "model_id": "083e0fb519b843e48d283a638e8a6dcb" } }, "metadata": {} }, { "output_type": "display_data", "data": { "text/plain": [ "special_tokens_map.json: 0%| | 0.00/438 [00:00, ?B/s]" ], "application/vnd.jupyter.widget-view+json": { "version_major": 2, "version_minor": 0, "model_id": "acedd5f384554c108b806e7ceef84382" } }, "metadata": {} } ], "source": [ "from unsloth import FastLanguageModel\n", "import torch\n", "max_seq_length = 4096 # Choose any! We auto support RoPE Scaling internally!\n", "dtype = None # None for auto detection. Float16 for Tesla T4, V100, Bfloat16 for Ampere+\n", "load_in_4bit = True # Use 4bit quantization to reduce memory usage. 
Can be False.\n", "\n", "# 4bit pre-quantized models we support for 4x faster downloading + no OOMs.\n", "fourbit_models = [\n", " \"unsloth/mistral-7b-bnb-4bit\",\n", " \"unsloth/mistral-7b-instruct-v0.2-bnb-4bit\",\n", " \"unsloth/llama-2-7b-bnb-4bit\",\n", " \"unsloth/llama-2-13b-bnb-4bit\",\n", " \"unsloth/codellama-34b-bnb-4bit\",\n", " \"unsloth/tinyllama-bnb-4bit\",\n", "] # More models at https://huggingface.co/unsloth\n", "\n", "model, tokenizer = FastLanguageModel.from_pretrained(\n", " model_name = \"unsloth/tinyllama-bnb-4bit\", # \"unsloth/tinyllama\" for 16bit loading\n", " max_seq_length = max_seq_length,\n", " dtype = dtype,\n", " load_in_4bit = load_in_4bit,\n", " # token = \"hf_...\", # use one if using gated models like meta-llama/Llama-2-7b-hf\n", ")" ] }, { "cell_type": "markdown", "metadata": { "id": "S3xvsMEWyJbZ" }, "source": [ "**[NOTE]** TinyLlama's internal maximum sequence length is 2048. We use RoPE Scaling to extend it to 4096 with Unsloth!" ] }, { "cell_type": "markdown", "metadata": { "id": "SXd9bTZd1aaL" }, "source": [ "We now add LoRA adapters so we only need to update 1 to 10% of all parameters!\n", "\n", "**[NOTE]** We set `use_gradient_checkpointing = False` ONLY for TinyLlama, since Unsloth already saves enough memory for it to fit. This does NOT work for `llama-2-7b` or `mistral-7b`, since their memory usage would still exceed the Tesla T4's 15GB. Gradient checkpointing recomputes the forward pass during the backward pass, trading extra compute for a large memory saving.\n", "\n", "**[IF YOU GET OUT OF MEMORY]** set `use_gradient_checkpointing` to `True`." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "colab": { "base_uri": "https://localhost:8080/" }, "id": "6bZsfBuZDeCL", "outputId": "3de492b6-0bb6-4fd1-a0e5-f5e0ef36dcd5" }, "outputs": [ { "output_type": "stream", "name": "stderr", "text": [ "Unsloth 2024.1 patched 22 layers with 22 QKV layers, 22 O layers and 22 MLP layers.\n" ] } ], "source": [ "model = FastLanguageModel.get_peft_model(\n", " model,\n", " r = 32, # Choose any number > 0! Suggested: 8, 16, 32, 64, 128\n", " target_modules = [\"q_proj\", \"k_proj\", \"v_proj\", \"o_proj\",\n", " \"gate_proj\", \"up_proj\", \"down_proj\",],\n", " lora_alpha = 32,\n", " lora_dropout = 0, # Currently only supports dropout = 0\n", " bias = \"none\", # Currently only supports bias = \"none\"\n", " use_gradient_checkpointing = False, # @@@ IF YOU GET OUT OF MEMORY - set to True @@@\n", " random_state = 3407,\n", " use_rslora = False, # We support rank stabilized LoRA\n", " loftq_config = None, # And LoftQ\n", ")" ] }, { "cell_type": "markdown", "metadata": { "id": "vITh0KVJ10qX" }, "source": [ "\n", "### Data Prep\n", "We now use the Alpaca dataset from [yahma](https://huggingface.co/datasets/yahma/alpaca-cleaned), a cleaned and filtered version of the original 52K-example [Alpaca dataset](https://crfm.stanford.edu/2023/03/13/alpaca.html). You can replace this code section with your own data prep.\n", "\n", "**[NOTE]** To train only on completions (ignoring the user's input), read TRL's docs [here](https://huggingface.co/docs/trl/sft_trainer#train-on-completions-only).\n", "\n", "**[NOTE]** Remember to add the **EOS_TOKEN** to the tokenized output!!
Otherwise you'll get infinite generations!\n", "\n", "If you want to use the `ChatML` template for ShareGPT datasets, try our conversational [notebook](https://colab.research.google.com/drive/1Aau3lgPzeZKQ-98h69CCu1UJcvIBLmy2?usp=sharing).\n", "\n", "For text completions like novel writing, try this [notebook](https://colab.research.google.com/drive/1ef-tab5bhkvWmBOObepl1WgJvfvSzn5Q?usp=sharing)." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "colab": { "base_uri": "https://localhost:8080/", "height": 145, "referenced_widgets": [ "a681577f4a00440496f2ee70e2f1198b", "c64f215f76c345c2aa1b98d23364ece6", "d32436f7cb7c49a88c2fa867add54367", "17e0b9c3b30e4d9ab34f684b0f0c8865", "93e320b8f6f242d3805ef65c81f33718", "d930f897415947ada08ce063f9cb8bed", "54e400aa31c0492fbbfd0d291eb9dd7f", "594b86fdbc6c4ac582aee52d0099a590", "d6b6c9420c644d42b314ac7599bd75b8", "f68e5c7237cd4260be1644f4a33ba4b9", "f5fb96b55ae943d5b0a12cef560be493", "de996974999a4ebbb8a11a8889b2c683", "030a45a01bab49d29a6058e4f1f3f9ba", "b4b50aea923441d88530edd555673a0e", "e138d8341f094d60a05f2124f4d9fc50", "2ffed33dbe52437591ed87c34f87eb61", "d3498bc5c493472e9d785129ca8293de", "08f540a5520b4fa5811a6c229231ab5f", "7806f5a3a9404bbcbcedfaf8d827a35c", "1223ef92bfd542aaa7f789b8bd799609", "f5ee29b85e664ef180df1d1a60db8223", "b4ac02ab0c814609824a3405d6e31206", "8375b238b9084bceb04c8d67e148f9dc", "a07890ec0fd740d3a84a8aae4979355a", "204dcfe6d8404137b8cb0ac57f82cb08", "becf56b2d70542b1bc330f09bb8c6174", "4dcaad14afbf4a2791d230239ff01b3c", "b51201dc59ab47aea72930e60625d2cb", "553c2e573f5947b4b5bdcb8b2168f016", "4172136737eb4cf48ce6ee5aa566224a", "6d3d3a38139147dfae6ce54b2adcbbf6", "5a55d70b2d0543d082603c586787b9ac", "b815a7d8a9c84a088537c8f3549e8930", "12f020de361f41128fac402f89be26d8", "4eca6c289693485ba95ee9badb47389f", "83736d38ebea43e3b9c58ce36ef1351e", "385e918a99614ca5af1f3a058aae0d31", "ece8d196afd24a0cae98b6233f7e13cf", "af8a211d04f647bfb90cd72ed16bed05", "22e26e8fcc344cd9a9a76a1e2c8bbf53", "f097c1e989394c7f8a226f08ff7a6e9a", "12e46295371a46bc94e7632a49917e60", "82b55fbb2ba74150900dddf1143e54f0", "c6e634b3cd924881a2ac9745fe32cd2f" ] }, "id": "LjY75GoYUCB8", "outputId": "3970582b-742e-41c7-813d-27b978b49c29" }, "outputs": [ { "output_type": "display_data", "data": { "text/plain": [ "Downloading readme: 0%| | 0.00/11.6k [00:00, ?B/s]" ], "application/vnd.jupyter.widget-view+json": { "version_major": 2, "version_minor": 0, "model_id": "a681577f4a00440496f2ee70e2f1198b" } }, "metadata": {} }, { "output_type": "display_data", "data": { "text/plain": [ "Downloading data: 0%| | 0.00/44.3M [00:00, ?B/s]" ], "application/vnd.jupyter.widget-view+json": { "version_major": 2, "version_minor": 0, "model_id": "de996974999a4ebbb8a11a8889b2c683" } }, "metadata": {} }, { "output_type": "display_data", "data": { "text/plain": [ "Generating train split: 0 examples [00:00, ? examples/s]" ], "application/vnd.jupyter.widget-view+json": { "version_major": 2, "version_minor": 0, "model_id": "8375b238b9084bceb04c8d67e148f9dc" } }, "metadata": {} }, { "output_type": "display_data", "data": { "text/plain": [ "Map: 0%| | 0/51760 [00:00, ? examples/s]" ], "application/vnd.jupyter.widget-view+json": { "version_major": 2, "version_minor": 0, "model_id": "12f020de361f41128fac402f89be26d8" } }, "metadata": {} } ], "source": [ "alpaca_prompt = \"\"\"Below is an instruction that describes a task, paired with an input that provides further context. 
Write a response that appropriately completes the request.\n", "\n", "### Instruction:\n", "{}\n", "\n", "### Input:\n", "{}\n", "\n", "### Response:\n", "{}\"\"\"\n", "\n", "EOS_TOKEN = tokenizer.eos_token\n", "def formatting_prompts_func(examples):\n", " instructions = examples[\"instruction\"]\n", " inputs = examples[\"input\"]\n", " outputs = examples[\"output\"]\n", " texts = []\n", " for instruction, input, output in zip(instructions, inputs, outputs):\n", " # Must add EOS_TOKEN, otherwise your generation will go on forever!\n", " text = alpaca_prompt.format(instruction, input, output) + EOS_TOKEN\n", " texts.append(text)\n", " return { \"text\" : texts, }\n", "pass\n", "\n", "from datasets import load_dataset\n", "dataset = load_dataset(\"yahma/alpaca-cleaned\", split = \"train\")\n", "dataset = dataset.map(formatting_prompts_func, batched = True,)" ] }, { "cell_type": "markdown", "metadata": { "id": "idAEIeSQ3xdS" }, "source": [ "\n", "### Train the model\n", "Now let's use Huggingface TRL's `SFTTrainer`! More docs here: [TRL SFT docs](https://huggingface.co/docs/trl/sft_trainer). We do 1 full epoch which makes Alpaca run in 80ish minutes! We also support TRL's `DPOTrainer`! See our DPO tutorial on a free Google Colab instance [here](https://colab.research.google.com/drive/15vttTpzzVXv_tJwEk-hIcQ0S9FcEWvwP?usp=sharing)." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "colab": { "base_uri": "https://localhost:8080/", "height": 211, "referenced_widgets": [ "47b91d628c824007ba2a899908543fc4", "544c9ffdfd6346d19865bf34d43e26a3", "0a88fd16d72442198803c4c082bdd5ab", "e245c09fe14a49929a8274fdc356fd25", "5cb07c2ef3be4a9f90cac8a53565040f", "93cfec82c4e24cd8bbf6d0044c92f9a4", "7aea5b2b4cb944efae3dc1de39b72f83", "444156136b864dd4896c9f83135772f6", "0da54be740b943c38a4aa805692b35f2", "3e5a6d4cd4bb45879ed67ab7d9cbcc16", "22a3640c1e344b5c922e488a9a2feaca" ] }, "id": "95_Nn-89DhsL", "outputId": "0be04edc-0aff-481c-a52f-fce8283ac502" }, "outputs": [ { "output_type": "stream", "name": "stderr", "text": [ "PyTorch: setting up devices\n", "The default value for the training argument `--report_to` will change in v5 (from all installed integrations to none). In v5, you will need to use `--report_to all` to get the same behavior as now. You should start updating your code and make this info disappear :-).\n", "loading file tokenizer.model from cache at /root/.cache/huggingface/hub/models--unsloth--tinyllama-bnb-4bit/snapshots/fc56510003ea9d49362400b8a362345150802c31/tokenizer.model\n", "loading file tokenizer.json from cache at /root/.cache/huggingface/hub/models--unsloth--tinyllama-bnb-4bit/snapshots/fc56510003ea9d49362400b8a362345150802c31/tokenizer.json\n", "loading file added_tokens.json from cache at None\n", "loading file special_tokens_map.json from cache at /root/.cache/huggingface/hub/models--unsloth--tinyllama-bnb-4bit/snapshots/fc56510003ea9d49362400b8a362345150802c31/special_tokens_map.json\n", "loading file tokenizer_config.json from cache at /root/.cache/huggingface/hub/models--unsloth--tinyllama-bnb-4bit/snapshots/fc56510003ea9d49362400b8a362345150802c31/tokenizer_config.json\n" ] }, { "output_type": "display_data", "data": { "text/plain": [ "Generating train split: 0 examples [00:00, ? 
examples/s]" ], "application/vnd.jupyter.widget-view+json": { "version_major": 2, "version_minor": 0, "model_id": "47b91d628c824007ba2a899908543fc4" } }, "metadata": {} }, { "output_type": "stream", "name": "stderr", "text": [ "Using auto half precision backend\n" ] } ], "source": [ "from trl import SFTTrainer\n", "from transformers import TrainingArguments\n", "from transformers.utils import logging\n", "logging.set_verbosity_info()\n", "\n", "trainer = SFTTrainer(\n", " model = model,\n", " tokenizer = tokenizer,\n", " train_dataset = dataset,\n", " dataset_text_field = \"text\",\n", " max_seq_length = max_seq_length,\n", " dataset_num_proc = 2,\n", " packing = True, # Packs short sequences together to save time!\n", " args = TrainingArguments(\n", " per_device_train_batch_size = 2,\n", " gradient_accumulation_steps = 4,\n", " warmup_ratio = 0.1,\n", " num_train_epochs = 1,\n", " learning_rate = 2e-5,\n", " fp16 = not torch.cuda.is_bf16_supported(),\n", " bf16 = torch.cuda.is_bf16_supported(),\n", " logging_steps = 1,\n", " optim = \"adamw_8bit\",\n", " weight_decay = 0.1,\n", " lr_scheduler_type = \"linear\",\n", " seed = 3407,\n", " output_dir = \"outputs\",\n", " ),\n", ")" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "cellView": "form", "colab": { "base_uri": "https://localhost:8080/" }, "id": "2ejIt2xSNKKp", "outputId": "b58a5395-e0e3-43eb-b5dc-fd656d571a3d" }, "outputs": [ { "output_type": "stream", "name": "stdout", "text": [ "GPU = Tesla T4. Max memory = 14.748 GB.\n", "0.879 GB of memory reserved.\n" ] } ], "source": [ "#@title Show current memory stats\n", "gpu_stats = torch.cuda.get_device_properties(0)\n", "start_gpu_memory = round(torch.cuda.max_memory_reserved() / 1024 / 1024 / 1024, 3)\n", "max_memory = round(gpu_stats.total_memory / 1024 / 1024 / 1024, 3)\n", "print(f\"GPU = {gpu_stats.name}. Max memory = {max_memory} GB.\")\n", "print(f\"{start_gpu_memory} GB of memory reserved.\")" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "colab": { "base_uri": "https://localhost:8080/", "height": 1000 }, "id": "yqxqAZ7KJ4oL", "outputId": "985d469c-eff6-4d90-b0df-7e187efc0cf7" }, "outputs": [ { "output_type": "stream", "name": "stderr", "text": [ "***** Running training *****\n", " Num examples = 3,000\n", " Num Epochs = 1\n", " Instantaneous batch size per device = 2\n", " Total train batch size (w. parallel, distributed & accumulation) = 8\n", " Gradient Accumulation steps = 4\n", " Total optimization steps = 375\n", " Number of trainable parameters = 25,231,360\n", "You're using a LlamaTokenizerFast tokenizer. Please note that with a fast tokenizer, using the `__call__` method is faster than using a method to encode the text followed by a call to the `pad` method to get a padded encoding.\n" ] }, { "output_type": "display_data", "data": { "text/plain": [ "Step | \n", "Training Loss | \n", "
---|---|
1 | \n", "2.280100 | \n", "
2 | \n", "2.243700 | \n", "
3 | \n", "2.245500 | \n", "
4 | \n", "2.243600 | \n", "
5 | \n", "2.302000 | \n", "
6 | \n", "2.287400 | \n", "
7 | \n", "2.283900 | \n", "
8 | \n", "2.144600 | \n", "
9 | \n", "2.237200 | \n", "
10 | \n", "2.245700 | \n", "
11 | \n", "2.286200 | \n", "
12 | \n", "2.281900 | \n", "
13 | \n", "2.204400 | \n", "
14 | \n", "2.158000 | \n", "
15 | \n", "2.278200 | \n", "
16 | \n", "2.168600 | \n", "
17 | \n", "2.147400 | \n", "
18 | \n", "2.119500 | \n", "
19 | \n", "2.122100 | \n", "
20 | \n", "2.099000 | \n", "
21 | \n", "2.072200 | \n", "
22 | \n", "2.066000 | \n", "
23 | \n", "2.073800 | \n", "
24 | \n", "2.000600 | \n", "
25 | \n", "2.043400 | \n", "
26 | \n", "1.992100 | \n", "
27 | \n", "2.020800 | \n", "
28 | \n", "1.972900 | \n", "
29 | \n", "1.984700 | \n", "
30 | \n", "1.991100 | \n", "
31 | \n", "1.973500 | \n", "
32 | \n", "1.964000 | \n", "
33 | \n", "1.896200 | \n", "
34 | \n", "1.872200 | \n", "
35 | \n", "1.862700 | \n", "
36 | \n", "1.855800 | \n", "
37 | \n", "1.792600 | \n", "
38 | \n", "1.849300 | \n", "
39 | \n", "1.745700 | \n", "
40 | \n", "1.734800 | \n", "
41 | \n", "1.646200 | \n", "
42 | \n", "1.701200 | \n", "
43 | \n", "1.705900 | \n", "
44 | \n", "1.702300 | \n", "
45 | \n", "1.650300 | \n", "
46 | \n", "1.601100 | \n", "
47 | \n", "1.639200 | \n", "
48 | \n", "1.633000 | \n", "
49 | \n", "1.612800 | \n", "
50 | \n", "1.638400 | \n", "
51 | \n", "1.593000 | \n", "
52 | \n", "1.549500 | \n", "
53 | \n", "1.500600 | \n", "
54 | \n", "1.511000 | \n", "
55 | \n", "1.489900 | \n", "
56 | \n", "1.526300 | \n", "
57 | \n", "1.447300 | \n", "
58 | \n", "1.454200 | \n", "
59 | \n", "1.454100 | \n", "
60 | \n", "1.482700 | \n", "
61 | \n", "1.403600 | \n", "
62 | \n", "1.416300 | \n", "
63 | \n", "1.422400 | \n", "
64 | \n", "1.380600 | \n", "
65 | \n", "1.425800 | \n", "
66 | \n", "1.379000 | \n", "
67 | \n", "1.404900 | \n", "
68 | \n", "1.415200 | \n", "
69 | \n", "1.362400 | \n", "
70 | \n", "1.368600 | \n", "
71 | \n", "1.412700 | \n", "
72 | \n", "1.388100 | \n", "
73 | \n", "1.352500 | \n", "
74 | \n", "1.365600 | \n", "
75 | \n", "1.332100 | \n", "
76 | \n", "1.363200 | \n", "
77 | \n", "1.379200 | \n", "
78 | \n", "1.328400 | \n", "
79 | \n", "1.289600 | \n", "
80 | \n", "1.340600 | \n", "
81 | \n", "1.325800 | \n", "
82 | \n", "1.277400 | \n", "
83 | \n", "1.294000 | \n", "
84 | \n", "1.292500 | \n", "
85 | \n", "1.285600 | \n", "
86 | \n", "1.272400 | \n", "
87 | \n", "1.223900 | \n", "
88 | \n", "1.296300 | \n", "
89 | \n", "1.313700 | \n", "
90 | \n", "1.285300 | \n", "
91 | \n", "1.290900 | \n", "
92 | \n", "1.232500 | \n", "
93 | \n", "1.242800 | \n", "
94 | \n", "1.240500 | \n", "
95 | \n", "1.227000 | \n", "
96 | \n", "1.198800 | \n", "
97 | \n", "1.224400 | \n", "
98 | \n", "1.271700 | \n", "
99 | \n", "1.205700 | \n", "
100 | \n", "1.251400 | \n", "
101 | \n", "1.207000 | \n", "
102 | \n", "1.249500 | \n", "
103 | \n", "1.225200 | \n", "
104 | \n", "1.228000 | \n", "
105 | \n", "1.191200 | \n", "
106 | \n", "1.255500 | \n", "
107 | \n", "1.194300 | \n", "
108 | \n", "1.184900 | \n", "
109 | \n", "1.182600 | \n", "
110 | \n", "1.191200 | \n", "
111 | \n", "1.250900 | \n", "
112 | \n", "1.213200 | \n", "
113 | \n", "1.146200 | \n", "
114 | \n", "1.177700 | \n", "
115 | \n", "1.217800 | \n", "
116 | \n", "1.245500 | \n", "
117 | \n", "1.154900 | \n", "
118 | \n", "1.205400 | \n", "
119 | \n", "1.155000 | \n", "
120 | \n", "1.176500 | \n", "
121 | \n", "1.152200 | \n", "
122 | \n", "1.203300 | \n", "
123 | \n", "1.194100 | \n", "
124 | \n", "1.222000 | \n", "
125 | \n", "1.153100 | \n", "
126 | \n", "1.172800 | \n", "
127 | \n", "1.191600 | \n", "
128 | \n", "1.215200 | \n", "
129 | \n", "1.207100 | \n", "
130 | \n", "1.200800 | \n", "
131 | \n", "1.177500 | \n", "
132 | \n", "1.140600 | \n", "
133 | \n", "1.141500 | \n", "
134 | \n", "1.146600 | \n", "
135 | \n", "1.122800 | \n", "
136 | \n", "1.152900 | \n", "
137 | \n", "1.190700 | \n", "
138 | \n", "1.154700 | \n", "
139 | \n", "1.183800 | \n", "
140 | \n", "1.160500 | \n", "
141 | \n", "1.096000 | \n", "
142 | \n", "1.124700 | \n", "
143 | \n", "1.121000 | \n", "
144 | \n", "1.182000 | \n", "
145 | \n", "1.144500 | \n", "
146 | \n", "1.182200 | \n", "
147 | \n", "1.151000 | \n", "
148 | \n", "1.152600 | \n", "
149 | \n", "1.224500 | \n", "
150 | \n", "1.116600 | \n", "
151 | \n", "1.149500 | \n", "
152 | \n", "1.162200 | \n", "
153 | \n", "1.099100 | \n", "
154 | \n", "1.119100 | \n", "
155 | \n", "1.142200 | \n", "
156 | \n", "1.188800 | \n", "
157 | \n", "1.135000 | \n", "
158 | \n", "1.159000 | \n", "
159 | \n", "1.125200 | \n", "
160 | \n", "1.183500 | \n", "
161 | \n", "1.123200 | \n", "
162 | \n", "1.139300 | \n", "
163 | \n", "1.129700 | \n", "
164 | \n", "1.111700 | \n", "
165 | \n", "1.093400 | \n", "
166 | \n", "1.139300 | \n", "
167 | \n", "1.125600 | \n", "
168 | \n", "1.100800 | \n", "
169 | \n", "1.137200 | \n", "
170 | \n", "1.087700 | \n", "
171 | \n", "1.052200 | \n", "
172 | \n", "1.153600 | \n", "
173 | \n", "1.132200 | \n", "
174 | \n", "1.127100 | \n", "
175 | \n", "1.125800 | \n", "
176 | \n", "1.120600 | \n", "
177 | \n", "1.123200 | \n", "
178 | \n", "1.148700 | \n", "
179 | \n", "1.128300 | \n", "
180 | \n", "1.154800 | \n", "
181 | \n", "1.101900 | \n", "
182 | \n", "1.150900 | \n", "
183 | \n", "1.085300 | \n", "
184 | \n", "1.152900 | \n", "
185 | \n", "1.141800 | \n", "
186 | \n", "1.090200 | \n", "
187 | \n", "1.167600 | \n", "
188 | \n", "1.109800 | \n", "
189 | \n", "1.059100 | \n", "
190 | \n", "1.071300 | \n", "
191 | \n", "1.111100 | \n", "
192 | \n", "1.146600 | \n", "
193 | \n", "1.125800 | \n", "
194 | \n", "1.082200 | \n", "
195 | \n", "1.112300 | \n", "
196 | \n", "1.159800 | \n", "
197 | \n", "1.097600 | \n", "
198 | \n", "1.107900 | \n", "
199 | \n", "1.114800 | \n", "
200 | \n", "1.103600 | \n", "
201 | \n", "1.082200 | \n", "
202 | \n", "1.080200 | \n", "
203 | \n", "1.103900 | \n", "
204 | \n", "1.120600 | \n", "
205 | \n", "1.106400 | \n", "
206 | \n", "1.123900 | \n", "
207 | \n", "1.118700 | \n", "
208 | \n", "1.070800 | \n", "
209 | \n", "1.096500 | \n", "
210 | \n", "1.107800 | \n", "
211 | \n", "1.084000 | \n", "
212 | \n", "1.138900 | \n", "
213 | \n", "1.082100 | \n", "
214 | \n", "1.101900 | \n", "
215 | \n", "1.080700 | \n", "
216 | \n", "1.124300 | \n", "
217 | \n", "1.082500 | \n", "
218 | \n", "1.098300 | \n", "
219 | \n", "1.089400 | \n", "
220 | \n", "1.090600 | \n", "
221 | \n", "1.109700 | \n", "
222 | \n", "1.084900 | \n", "
223 | \n", "1.062900 | \n", "
224 | \n", "1.090400 | \n", "
225 | \n", "1.119900 | \n", "
226 | \n", "1.122600 | \n", "
227 | \n", "1.106500 | \n", "
228 | \n", "1.068300 | \n", "
229 | \n", "1.148000 | \n", "
230 | \n", "1.120300 | \n", "
231 | \n", "1.051000 | \n", "
232 | \n", "1.115600 | \n", "
233 | \n", "1.070800 | \n", "
234 | \n", "1.124800 | \n", "
235 | \n", "1.071500 | \n", "
236 | \n", "1.083000 | \n", "
237 | \n", "1.081800 | \n", "
238 | \n", "1.045200 | \n", "
239 | \n", "1.127600 | \n", "
240 | \n", "1.120100 | \n", "
241 | \n", "1.089800 | \n", "
242 | \n", "1.173400 | \n", "
243 | \n", "1.100400 | \n", "
244 | \n", "1.107100 | \n", "
245 | \n", "1.121700 | \n", "
246 | \n", "1.037800 | \n", "
247 | \n", "1.103900 | \n", "
248 | \n", "1.112700 | \n", "
249 | \n", "1.134600 | \n", "
250 | \n", "1.108800 | \n", "
251 | \n", "1.100000 | \n", "
252 | \n", "1.056000 | \n", "
253 | \n", "1.089300 | \n", "
254 | \n", "1.078600 | \n", "
255 | \n", "1.084300 | \n", "
256 | \n", "1.073000 | \n", "
257 | \n", "1.062500 | \n", "
258 | \n", "1.110800 | \n", "
259 | \n", "1.074600 | \n", "
260 | \n", "1.151400 | \n", "
261 | \n", "1.104700 | \n", "
262 | \n", "1.099700 | \n", "
263 | \n", "1.084100 | \n", "
264 | \n", "1.077900 | \n", "
265 | \n", "1.078400 | \n", "
266 | \n", "1.046300 | \n", "
267 | \n", "1.042800 | \n", "
268 | \n", "1.090200 | \n", "
269 | \n", "1.042600 | \n", "
270 | \n", "1.112700 | \n", "
271 | \n", "1.115300 | \n", "
272 | \n", "1.125800 | \n", "
273 | \n", "1.092600 | \n", "
274 | \n", "1.115700 | \n", "
275 | \n", "1.101700 | \n", "
276 | \n", "1.081200 | \n", "
277 | \n", "1.094400 | \n", "
278 | \n", "1.057900 | \n", "
279 | \n", "1.060400 | \n", "
280 | \n", "1.133200 | \n", "
281 | \n", "1.053900 | \n", "
282 | \n", "1.102300 | \n", "
283 | \n", "1.075500 | \n", "
284 | \n", "1.115100 | \n", "
285 | \n", "1.029300 | \n", "
286 | \n", "1.038500 | \n", "
287 | \n", "1.055000 | \n", "
288 | \n", "1.110800 | \n", "
289 | \n", "1.116200 | \n", "
290 | \n", "1.050500 | \n", "
291 | \n", "1.073800 | \n", "
292 | \n", "1.075900 | \n", "
293 | \n", "1.131600 | \n", "
294 | \n", "1.141400 | \n", "
295 | \n", "1.140300 | \n", "
296 | \n", "1.096600 | \n", "
297 | \n", "1.079900 | \n", "
298 | \n", "1.061700 | \n", "
299 | \n", "1.091000 | \n", "
300 | \n", "1.075700 | \n", "
301 | \n", "1.123300 | \n", "
302 | \n", "1.098200 | \n", "
303 | \n", "1.118200 | \n", "
304 | \n", "1.089600 | \n", "
305 | \n", "1.102400 | \n", "
306 | \n", "1.078900 | \n", "
307 | \n", "1.074900 | \n", "
308 | \n", "1.037700 | \n", "
309 | \n", "1.069100 | \n", "
310 | \n", "1.071500 | \n", "
311 | \n", "1.088100 | \n", "
312 | \n", "1.074900 | \n", "
313 | \n", "1.101600 | \n", "
314 | \n", "1.097200 | \n", "
315 | \n", "1.041500 | \n", "
316 | \n", "1.117300 | \n", "
317 | \n", "1.065400 | \n", "
318 | \n", "1.090700 | \n", "
319 | \n", "1.095100 | \n", "
320 | \n", "1.108600 | \n", "
321 | \n", "1.089400 | \n", "
322 | \n", "1.083800 | \n", "
323 | \n", "1.059400 | \n", "
324 | \n", "1.091200 | \n", "
325 | \n", "1.063700 | \n", "
326 | \n", "1.045400 | \n", "
327 | \n", "1.042900 | \n", "
328 | \n", "1.136300 | \n", "
329 | \n", "1.086600 | \n", "
330 | \n", "1.067300 | \n", "
331 | \n", "1.053600 | \n", "
332 | \n", "1.096500 | \n", "
333 | \n", "1.104300 | \n", "
334 | \n", "1.040800 | \n", "
335 | \n", "1.090800 | \n", "
336 | \n", "1.069800 | \n", "
337 | \n", "1.025400 | \n", "
338 | \n", "1.030300 | \n", "
339 | \n", "1.049500 | \n", "
340 | \n", "1.081800 | \n", "
341 | \n", "1.090200 | \n", "
342 | \n", "1.065400 | \n", "
343 | \n", "1.068600 | \n", "
344 | \n", "1.111300 | \n", "
345 | \n", "1.091700 | \n", "
346 | \n", "1.062700 | \n", "
347 | \n", "1.090900 | \n", "
348 | \n", "1.099900 | \n", "
349 | \n", "1.037900 | \n", "
350 | \n", "1.108200 | \n", "
351 | \n", "1.102100 | \n", "
352 | \n", "1.085800 | \n", "
353 | \n", "1.054900 | \n", "
354 | \n", "1.087700 | \n", "
355 | \n", "1.103500 | \n", "
356 | \n", "1.063600 | \n", "
357 | \n", "1.080600 | \n", "
358 | \n", "1.095600 | \n", "
359 | \n", "1.054600 | \n", "
360 | \n", "1.056100 | \n", "
361 | \n", "1.093400 | \n", "
362 | \n", "1.044400 | \n", "
363 | \n", "1.052700 | \n", "
364 | \n", "1.051300 | \n", "
365 | \n", "1.060500 | \n", "
366 | \n", "1.063400 | \n", "
367 | \n", "1.044900 | \n", "
368 | \n", "1.108100 | \n", "
369 | \n", "1.074500 | \n", "
370 | \n", "1.038300 | \n", "
371 | \n", "1.047300 | \n", "
372 | \n", "1.072900 | \n", "
373 | \n", "1.064600 | \n", "
374 | \n", "1.104100 | \n", "
375 | \n", "1.074300 | \n", "
"
]
},
"metadata": {}
},
{
"output_type": "stream",
"name": "stderr",
"text": [
"\n",
"\n",
"Training completed. Do not forget to share your model on huggingface.co/models =)\n",
"\n",
"\n"
]
}
],
"source": [
"trainer_stats = trainer.train()"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"cellView": "form",
"colab": {
"base_uri": "https://localhost:8080/"
},
"id": "pCqnaKmlO1U9",
"outputId": "ecb21ea9-b0f5-48ab-9145-5baedb68a203"
},
"outputs": [
{
"output_type": "stream",
"name": "stdout",
"text": [
"5034.6413 seconds used for training.\n",
"83.91 minutes used for training.\n",
"Peak reserved memory = 13.508 GB.\n",
"Peak reserved memory for training = 12.629 GB.\n",
"Peak reserved memory % of max memory = 91.592 %.\n",
"Peak reserved memory for training % of max memory = 85.632 %.\n"
]
}
],
"source": [
"#@title Show final memory and time stats\n",
"used_memory = round(torch.cuda.max_memory_reserved() / 1024 / 1024 / 1024, 3)\n",
"used_memory_for_lora = round(used_memory - start_gpu_memory, 3)\n",
"used_percentage = round(used_memory /max_memory*100, 3)\n",
"lora_percentage = round(used_memory_for_lora/max_memory*100, 3)\n",
"print(f\"{trainer_stats.metrics['train_runtime']} seconds used for training.\")\n",
"print(f\"{round(trainer_stats.metrics['train_runtime']/60, 2)} minutes used for training.\")\n",
"print(f\"Peak reserved memory = {used_memory} GB.\")\n",
"print(f\"Peak reserved memory for training = {used_memory_for_lora} GB.\")\n",
"print(f\"Peak reserved memory % of max memory = {used_percentage} %.\")\n",
"print(f\"Peak reserved memory for training % of max memory = {lora_percentage} %.\")"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "ekOmTR1hSNcr"
},
"source": [
"\n",
"### Inference\n",
"Let's run the model! You can change the instruction and input - leave the output blank!"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"id": "kR3gIAX-SM2q",
"outputId": "2283bb0c-d568-4c92-8352-094ba12eaf06"
},
"outputs": [
{
"output_type": "execute_result",
"data": {
"text/plain": [
"[' Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.\\n\\n### Instruction:\\nContinue the fibonnaci sequence.\\n\\n### Input:\\n1, 1, 2, 3, 5, 8\\n\\n### Response:\\nThe fibonacci sequence is a sequence of numbers that can be generated by adding the previous two numbers and then subtracting the previous number from the previous number. The first number in the sequence is 1, and the second number is 1. The third number in the sequence is 1 + 1 = 2, the fourth number is 1 + 2 = 3, and so on. The sequence can be continued by adding 1 + 1 = 2, 1 + 2 = 3, and so on.']"
]
},
"metadata": {},
"execution_count": 9
}
],
"source": [
"# alpaca_prompt = Copied from above\n",
"FastLanguageModel.for_inference(model) # Enable native 2x faster inference\n",
"inputs = tokenizer(\n",
"[\n",
" alpaca_prompt.format(\n",
" \"Continue the fibonnaci sequence.\", # instruction\n",
" \"1, 1, 2, 3, 5, 8\", # input\n",
" \"\", # output - leave this blank for generation!\n",
" )\n",
"], return_tensors = \"pt\").to(\"cuda\")\n",
"\n",
"outputs = model.generate(**inputs, max_new_tokens = 64, use_cache = True)\n",
"tokenizer.batch_decode(outputs)"
]
},
{
"cell_type": "markdown",
"source": [
" You can also use a `TextStreamer` for continuous inference - so you can see the generation token by token, instead of waiting the whole time!"
],
"metadata": {
"id": "V2otZJcevdpZ"
}
},
{
"cell_type": "code",
"source": [
"# alpaca_prompt = Copied from above\n",
"FastLanguageModel.for_inference(model) # Enable native 2x faster inference\n",
"inputs = tokenizer(\n",
"[\n",
" alpaca_prompt.format(\n",
" \"Continue the fibonnaci sequence.\", # instruction\n",
" \"1, 1, 2, 3, 5, 8\", # input\n",
" \"\", # output - leave this blank for generation!\n",
" )\n",
"], return_tensors = \"pt\").to(\"cuda\")\n",
"\n",
"from transformers import TextStreamer\n",
"text_streamer = TextStreamer(tokenizer)\n",
"_ = model.generate(**inputs, streamer = text_streamer, max_new_tokens = 128)"
],
"metadata": {
"id": "QYvyvuj5vd7H"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"metadata": {
"id": "uMuVrWbjAzhc"
},
"source": [
"\n",
"### Saving, loading finetuned models\n",
"To save the final model as LoRA adapters, either use Huggingface's `push_to_hub` for an online save or `save_pretrained` for a local save.\n",
"\n",
"**[NOTE]** This ONLY saves the LoRA adapters, and not the full model. To save to 16bit or GGUF, scroll down!"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "upcOlWe7A1vc"
},
"outputs": [],
"source": [
"model.save_pretrained(\"lora_model\") # Local saving\n",
"# model.push_to_hub(\"your_name/lora_model\", token = \"...\") # Online saving"
]
},
{
"cell_type": "markdown",
"source": [
"Now if you want to load the LoRA adapters we just saved for inference, set `False` to `True`:"
],
"metadata": {
"id": "3CgqR2B0vmCt"
}
},
{
"cell_type": "code",
"source": [
"if False:\n",
" from unsloth import FastLanguageModel\n",
" model, tokenizer = FastLanguageModel.from_pretrained(\n",
" model_name = \"lora_model\", # YOUR MODEL YOU USED FOR TRAINING\n",
" max_seq_length = max_seq_length,\n",
" dtype = dtype,\n",
" load_in_4bit = load_in_4bit,\n",
" )\n",
" FastLanguageModel.for_inference(model) # Enable native 2x faster inference\n",
"\n",
"# alpaca_prompt = You MUST copy from above!\n",
"\n",
"inputs = tokenizer(\n",
"[\n",
" alpaca_prompt.format(\n",
" \"What is a famous tall tower in Paris?\", # instruction\n",
" \"\", # input\n",
" \"\", # output - leave this blank for generation!\n",
" )\n",
"], return_tensors = \"pt\").to(\"cuda\")\n",
"\n",
"from transformers import TextStreamer\n",
"text_streamer = TextStreamer(tokenizer)\n",
"_ = model.generate(**inputs, streamer = text_streamer, max_new_tokens = 64)"
],
"metadata": {
"id": "Yle1gGB3vmWK"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"You can also use Hugging Face's `AutoModelForPeftCausalLM`. Only use this if you do not have `unsloth` installed. It can be hopelessly slow, since `4bit` model downloading is not supported, and Unsloth's **inference is 2x faster**."
],
"metadata": {
"id": "8m76iItmvni0"
}
},
{
"cell_type": "code",
"source": [
"if False:\n",
" # I highly do NOT suggest - use Unsloth if possible\n",
" from peft import AutoPeftModelForCausalLM\n",
" from transformers import AutoTokenizer\n",
" model = AutoPeftModelForCausalLM.from_pretrained(\n",
" \"lora_model\", # YOUR MODEL YOU USED FOR TRAINING\n",
" load_in_4bit = load_in_4bit,\n",
" )\n",
" tokenizer = AutoTokenizer.from_pretrained(\"lora_model\")"
],
"metadata": {
"id": "wcMqKxzcvouj"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"### Saving to float16 for VLLM\n",
"\n",
"We also support saving to `float16` directly. Select `merged_16bit` for float16 or `merged_4bit` for int4. We also allow `lora` adapters as a fallback. Use `push_to_hub_merged` to upload to your Hugging Face account! You can go to https://huggingface.co/settings/tokens for your personal tokens."
],
"metadata": {
"id": "xwCTbEUavpoC"
}
},
{
"cell_type": "code",
"source": [
"# Merge to 16bit\n",
"if False: model.save_pretrained_merged(\"model\", tokenizer, save_method = \"merged_16bit\",)\n",
"if False: model.push_to_hub_merged(\"hf/model\", tokenizer, save_method = \"merged_16bit\", token = \"\")\n",
"\n",
"# Merge to 4bit\n",
"if False: model.save_pretrained_merged(\"model\", tokenizer, save_method = \"merged_4bit\",)\n",
"if False: model.push_to_hub_merged(\"hf/model\", tokenizer, save_method = \"merged_4bit\", token = \"\")\n",
"\n",
"# Just LoRA adapters\n",
"if False: model.save_pretrained_merged(\"model\", tokenizer, save_method = \"lora\",)\n",
"if False: model.push_to_hub_merged(\"hf/model\", tokenizer, save_method = \"lora\", token = \"\")"
],
"metadata": {
"id": "gJKx0osWvqzz"
},
"execution_count": null,
"outputs": []
},
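{
"cell_type": "markdown",
"source": [
"If you merged to 16bit, below is a minimal, hedged sketch of loading the merged folder with vLLM for inference. The `model` folder path, prompt, and sampling settings are placeholder assumptions matching the save calls above - point them at wherever you actually saved or pushed the merged weights."
],
"metadata": {}
},
{
"cell_type": "code",
"source": [
"# Minimal vLLM sketch (assumes `pip install vllm` and that you ran the `merged_16bit` save above).\n",
"# \"model\" is the local folder written by save_pretrained_merged - swap in your own path or HF repo.\n",
"if False:\n",
"    from vllm import LLM, SamplingParams\n",
"    llm = LLM(model = \"model\") # path to the merged 16bit folder\n",
"    sampling_params = SamplingParams(temperature = 0.8, max_tokens = 64)\n",
"    outputs = llm.generate([\"What is a famous tall tower in Paris?\"], sampling_params)\n",
"    print(outputs[0].outputs[0].text)"
],
"metadata": {},
"execution_count": null,
"outputs": []
},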
{
"cell_type": "markdown",
"source": [
"### GGUF / llama.cpp Conversion\n",
"To save to `GGUF` / `llama.cpp`, we support it natively now! We clone `llama.cpp` and we default save it to `q8_0`. We allow all methods like `q4_k_m`. Use `save_pretrained_gguf` for local saving and `push_to_hub_gguf` for uploading to HF.\n",
"\n",
"Some supported quant methods (full list on our [Wiki page](https://github.com/unslothai/unsloth/wiki#gguf-quantization-options)):\n",
"* `q8_0` - Fast conversion. High resource use, but generally acceptable.\n",
"* `q4_k_m` - Recommended. Uses Q6_K for half of the attention.wv and feed_forward.w2 tensors, else Q4_K.\n",
"* `q5_k_m` - Recommended. Uses Q6_K for half of the attention.wv and feed_forward.w2 tensors, else Q5_K."
],
"metadata": {
"id": "mhc9u6HAvr3b"
}
},
{
"cell_type": "code",
"source": [
"# Save to 8bit Q8_0\n",
"if False: model.save_pretrained_gguf(\"model\", tokenizer,)\n",
"if False: model.push_to_hub_gguf(\"hf/model\", tokenizer, token = \"\")\n",
"\n",
"# Save to 16bit GGUF\n",
"if False: model.save_pretrained_gguf(\"model\", tokenizer, quantization_method = \"f16\")\n",
"if False: model.push_to_hub_gguf(\"hf/model\", tokenizer, quantization_method = \"f16\", token = \"\")\n",
"\n",
"# Save to q4_k_m GGUF\n",
"if False: model.save_pretrained_gguf(\"model\", tokenizer, quantization_method = \"q4_k_m\")\n",
"if False: model.push_to_hub_gguf(\"hf/model\", tokenizer, quantization_method = \"q4_k_m\", token = \"\")"
],
"metadata": {
"id": "2_TmxAoavvYW"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"Now, use the `model-unsloth.gguf` file or `model-unsloth-Q4_K_M.gguf` file in `llama.cpp` or a UI based system like `GPT4All`. You can install GPT4All by going [here](https://gpt4all.io/index.html)."
],
"metadata": {
"id": "SvFui8YuvZ1R"
}
},
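{
"cell_type": "markdown",
"source": [
"Below is a minimal, hedged sketch of running the exported GGUF with llama.cpp's example CLI. The binary name (`main` in older builds, `llama-cli` in newer ones) and the GGUF file path depend on your llama.cpp checkout and on which quantization you saved above, so treat both as placeholder assumptions."
],
"metadata": {}
},
{
"cell_type": "code",
"source": [
"# Hedged sketch: run the saved GGUF with llama.cpp's CLI.\n",
"# -m = model path, -p = prompt, -n = number of tokens to generate.\n",
"# Adjust the binary (./main vs ./llama-cli) and the GGUF path to wherever Unsloth wrote the file.\n",
"if False:\n",
"    !cd llama.cpp && ./main -m ../model-unsloth.gguf -p \"Continue the fibonacci sequence: 1, 1, 2, 3, 5, 8,\" -n 64"
],
"metadata": {},
"execution_count": null,
"outputs": []
},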
{
"cell_type": "markdown",
"metadata": {
"id": "Zt9CHJqO6p30"
},
"source": [
"And we're done! If you have any questions on Unsloth, we have a [Discord](https://discord.gg/u54VK8m8tk) channel! If you find any bugs or want to keep updated with the latest LLM stuff, or need help, join projects etc, feel free to join our Discord!\n",
"\n",
"Some other links:\n",
"1. Zephyr DPO 2x faster [free Colab](https://colab.research.google.com/drive/15vttTpzzVXv_tJwEk-hIcQ0S9FcEWvwP?usp=sharing)\n",
"2. Mistral 7b 2x faster [free Colab](https://colab.research.google.com/drive/1Dyauq4kTZoLewQ1cApceUQVNcnnNTzg_?usp=sharing)\n",
"3. Llama 7b 2x faster [free Colab](https://colab.research.google.com/drive/1lBzz5KeZJKXjvivbYvmGarix9Ao6Wxe5?usp=sharing)\n",
"4. CodeLlama 34b 2x faster [A100 on Colab](https://colab.research.google.com/drive/1y7A0AxE3y8gdj4AVkl2aZX47Xu3P1wJT?usp=sharing)\n",
"5. Mistral 7b [free Kaggle version](https://www.kaggle.com/code/danielhanchen/kaggle-mistral-7b-unsloth-notebook)\n",
"6. We also did a [blog](https://huggingface.co/blog/unsloth-trl) with 🤗 HuggingFace, and we're in the TRL [docs](https://huggingface.co/docs/trl/main/en/sft_trainer#accelerate-fine-tuning-2x-using-unsloth)!\n",
"7. `ChatML` for ShareGPT datasets, [conversational notebook](https://colab.research.google.com/drive/1Aau3lgPzeZKQ-98h69CCu1UJcvIBLmy2?usp=sharing)\n",
"8. Text completions like novel writing [notebook](https://colab.research.google.com/drive/1ef-tab5bhkvWmBOObepl1WgJvfvSzn5Q?usp=sharing)\n",
"\n",
"