{ "cells": [ { "cell_type": "markdown", "metadata": { "id": "IqM-T1RTzY6C" }, "source": [ "To run this notebook, go to \"*Runtime*\" and select \"*Run all*\" on a **free** Tesla T4 Google Colab instance!\n", "
\n", "\n", "To install Unsloth on your own computer, follow the installation instructions on our GitHub page [here](https://github.com/unslothai/unsloth?tab=readme-ov-file#-installation-instructions).\n", "\n", "You will learn how to do [data prep](#Data), how to [train](#Train), how to [run the model](#Inference), & [how to save it](#Save) (e.g. for Llama.cpp).\n", "\n", "**[NEW] Try 2x faster inference in a free Colab for Llama-3.1 8b Instruct [here](https://colab.research.google.com/drive/1T-YBVfnphoVc8E2E854qF3jdia2Ll2W2?usp=sharing)**\n", "\n", "**[NEW] Finetuning Mistral Small 22b fits in a 16GB GPU!**" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "2eSvM9zX_2d3" }, "outputs": [], "source": [ "%%capture\n", "!pip install unsloth\n", "# Also get the latest nightly Unsloth!\n", "!pip uninstall unsloth -y && pip install --upgrade --no-cache-dir \"unsloth[colab-new] @ git+https://github.com/unslothai/unsloth.git\"" ] }, { "cell_type": "markdown", "metadata": { "id": "r2v_X2fA0Df5" }, "source": [ "* We support Llama, Mistral, Phi-3, Gemma, Yi, DeepSeek, Qwen, TinyLlama, Vicuna, Open Hermes, etc.\n", "* We support 16bit LoRA or 4bit QLoRA. Both are 2x faster.\n", "* `max_seq_length` can be set to anything, since we do automatic RoPE Scaling via [kaiokendev's](https://kaiokendev.github.io/til) method.\n", "* [**NEW**] We make Gemma-2 9b / 27b **2x faster**! See our [Gemma-2 9b notebook](https://colab.research.google.com/drive/1vIrqH5uYDQwsJ4-OO3DErvuv4pBgVwk4?usp=sharing)\n", "* [**NEW**] To finetune and auto export to Ollama, try our [Ollama notebook](https://colab.research.google.com/drive/1WZDi7APtQ9VsvOrQSSC5DDtxq159j8iZ?usp=sharing)\n", "* [**NEW**] We make Mistral NeMo 12B 2x faster and fit in under 12GB of VRAM! 
[Mistral NeMo notebook](https://colab.research.google.com/drive/17d3U-CAIwzmbDRqbZ9NnpHxCkmXB6LZ0?usp=sharing)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "colab": { "base_uri": "https://localhost:8080/", "height": 403, "referenced_widgets": [ "51f1e28d282645a58c8b783f4f60cfc2", "7ca5730f63b4420cab8b124a17aaeb27", "64747677b60c489ebd3e769272533b3f", "5c5c498bd046409c860372110e523c7c", "ae087aee8e96402681d22d892d6cd476", "f340840186b4458dbff47afe987f1f59", "723b04a16f3a402589fdb9463834f3d1", "e50f33072d6d454b98c6607f9e847401", "3f97e7bd5fff420c80615d676e1648f0", "e7f6f37bf5b6483788e50c112b7ef007", "aa566decf6b4412caa2988fa623900ea", "b90fe47ddb6240bd90ddd1705a9f3fc9", "ca20885d9adc4815b5418073a7930f8f", "95aa990d762d428d93cef2834fb86c8a", "c2d228de02d14c6c8f780048b1ccc088", "c56ce88b9fab4475af5fafbc7a845010", "3e94165c4ab0471db8fb1fbd5b5bac0d", "74c3cbb850e44a4c9eb8283080ba075e", "20d356533da04942856986a33e7a99fb", "3cdf1d5b878b41838f0ec2b4d877e97a", "a2cde30a94d3462488bcb33693e3e274", "f04ed92a356c489a9877f82b05bb330f", "fe0cef5f02ca4e5e95b06356b8286fbe", "6db31893e3f84043b5abc6a24bac8228", "fd5eccb2370b40b58eea5c9f0d868e36", "90762bb3d5fd4f4db67f3a8a11434689", "004f3ec8f7a545c4bc54484dcb3022bb", "a6500ce74ca54e0ca650851502b14644", "de38fc3f3df348f29528d9acd6b9d981", "a3c7c2459ad14e9c81d7422d7e83393f", "fdd24808ed23442998104b5b28370aa6", "2e6a26fb12084d5487525f5e78ab5ac8", "8e082fef631b4eeab73c02e181f5690c", "b57e30ad94fa4b739d32c9553f5aee29", "4677d087bf6b40a3a4915ac7481f6e8d", "90c9301c342846729db7e3c6dfe5b849", "c7977b2008c2476596c5351012e710b6", "dcb2b3f1102a44429e62828b99ed39ab", "673da437f86a4371b7e3913a66de835a", "358035cd9f6943aeadc4cba1964109a6", "edb08520684f4f83a7094599ed55cb37", "c2185ae3f8aa4f3488e0bd7257664e26", "71f71606c101414bae187de7f145ea43", "bf1ea3ec39db442d91f74fdcfd1c5ac3", "62a6bc239405496ca1e451fbda8787f3", "7af5367620b64131a6fef8c2864d0d28", "d9f230d8474c40fc995c67c4f1eeb86a", "596156e7bb7346c1808ed960997a5159", 
"418ece30091b4d66aca4df6367e0bec5", "b1610596162844658f4ac1893f0fdd40", "87ccf938ee7641acb94c6050bb7c4b20", "d1462aa795714430bfee51674a619527", "cd98cb1f265448cf90adfd4fb3362b0d", "780d3c81c8e2461694df4d515d381d9d", "f5b0174aa23e432896d0dfe37387036b", "a80caed5f8af41ca99576d9daa68c6f6", "262070892253448793aba4d048f40c08", "e5486d352f314f45b663a6472d6ff885", "f033347d7cdb4f38a3eb3e05f546e438", "d1549b76e8ff4d69b17f9a0831b43551", "45d4f27475294750aff2487353c8105e", "3926bab2dbad4e5fb4362ee96d6fdd67", "e885fb98968949589006001c2f84a8eb", "1fa73eafabb14c73aaee39354c62477f", "2279b927aab74513aa1f6efb2c66c426", "b44759c58b284a5a950350a2cf82c4e6" ] }, "id": "QmUBVEnvCDJv", "outputId": "55acd488-9a43-4d68-8b55-0e3061ff247f" }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "🦥 Unsloth: Will patch your computer to enable 2x faster free finetuning.\n", "==((====))== Unsloth 2024.10.7: Fast Gemma2 patching. Transformers = 4.44.2.\n", " \\\\ /| GPU: Tesla T4. Max memory: 14.748 GB. Platform = Linux.\n", "O^O/ \\_/ \\ Pytorch: 2.5.0+cu121. CUDA = 7.5. CUDA Toolkit = 12.1.\n", "\\ / Bfloat16 = FALSE. FA [Xformers = 0.0.28.post2. 
FA2 = False]\n", " \"-____-\" Free Apache license: http://github.com/unslothai/unsloth\n", "Unsloth: Fast downloading is enabled - ignore downloading bars which are red colored!\n" ] }, { "data": { "application/vnd.jupyter.widget-view+json": { "model_id": "51f1e28d282645a58c8b783f4f60cfc2", "version_major": 2, "version_minor": 0 }, "text/plain": [ "model.safetensors: 0%| | 0.00/6.13G [00:00, ?B/s]" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "application/vnd.jupyter.widget-view+json": { "model_id": "b90fe47ddb6240bd90ddd1705a9f3fc9", "version_major": 2, "version_minor": 0 }, "text/plain": [ "generation_config.json: 0%| | 0.00/190 [00:00, ?B/s]" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "application/vnd.jupyter.widget-view+json": { "model_id": "fe0cef5f02ca4e5e95b06356b8286fbe", "version_major": 2, "version_minor": 0 }, "text/plain": [ "tokenizer_config.json: 0%| | 0.00/46.4k [00:00, ?B/s]" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "application/vnd.jupyter.widget-view+json": { "model_id": "b57e30ad94fa4b739d32c9553f5aee29", "version_major": 2, "version_minor": 0 }, "text/plain": [ "tokenizer.model: 0%| | 0.00/4.24M [00:00, ?B/s]" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "application/vnd.jupyter.widget-view+json": { "model_id": "62a6bc239405496ca1e451fbda8787f3", "version_major": 2, "version_minor": 0 }, "text/plain": [ "special_tokens_map.json: 0%| | 0.00/636 [00:00, ?B/s]" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "application/vnd.jupyter.widget-view+json": { "model_id": "a80caed5f8af41ca99576d9daa68c6f6", "version_major": 2, "version_minor": 0 }, "text/plain": [ "tokenizer.json: 0%| | 0.00/17.5M [00:00, ?B/s]" ] }, "metadata": {}, "output_type": "display_data" }, { "name": "stderr", "output_type": "stream", "text": [ "Unsloth: We fixed a gradient accumulation bug, but it seems like you don't have the latest transformers version!\n", "Please update 
transformers, TRL and unsloth via:\n", "`pip install --upgrade --no-cache-dir unsloth git+https://github.com/huggingface/transformers.git git+https://github.com/huggingface/trl.git`\n" ] } ], "source": [ "from unsloth import FastLanguageModel\n", "import torch\n", "max_seq_length = 2048 # Choose any! We auto support RoPE Scaling internally!\n", "dtype = None # None for auto detection. Float16 for Tesla T4, V100, Bfloat16 for Ampere+\n", "load_in_4bit = True # Use 4bit quantization to reduce memory usage. Can be False.\n", "\n", "model, tokenizer = FastLanguageModel.from_pretrained(\n", " model_name = \"unsloth/gemma-2-9b\",\n", " max_seq_length = max_seq_length,\n", " dtype = dtype,\n", " load_in_4bit = load_in_4bit,\n", " token = \"hf_\",\n", ")" ] }, { "cell_type": "markdown", "metadata": { "id": "SXd9bTZd1aaL" }, "source": [ "We now add LoRA adapters so we only need to update 1 to 10% of all parameters!" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "colab": { "base_uri": "https://localhost:8080/" }, "id": "6bZsfBuZDeCL", "outputId": "083cf8e8-fb6f-4209-b76e-d36ac8af7cae" }, "outputs": [ { "name": "stderr", "output_type": "stream", "text": [ "Unsloth 2024.10.7 patched 42 layers with 42 QKV layers, 42 O layers and 42 MLP layers.\n" ] } ], "source": [ "model = FastLanguageModel.get_peft_model(\n", " model,\n", " r = 16, # Choose any number > 0 ! 
Suggested 8, 16, 32, 64, 128\n", " target_modules = [\"q_proj\", \"k_proj\", \"v_proj\", \"o_proj\",\n", " \"gate_proj\", \"up_proj\", \"down_proj\",],\n", " lora_alpha = 16,\n", " lora_dropout = 0, # Supports any, but = 0 is optimized\n", " bias = \"none\", # Supports any, but = \"none\" is optimized\n", " # [NEW] \"unsloth\" uses 30% less VRAM, fits 2x larger batch sizes!\n", " use_gradient_checkpointing = \"unsloth\", # True or \"unsloth\" for very long context\n", " random_state = 3407,\n", " use_rslora = False, # We support rank stabilized LoRA\n", " loftq_config = None, # And LoftQ\n", ")" ] }, { "cell_type": "markdown", "metadata": { "id": "vITh0KVJ10qX" }, "source": [ "\n", "### Data Prep\n", "We now use the [Bangla Alpaca-Orca dataset](https://huggingface.co/datasets/BanglaLLM/bangla-alpaca-orca) from BanglaLLM, a Bangla instruction dataset in the Alpaca format (compare the original English [Alpaca dataset](https://crfm.stanford.edu/2023/03/13/alpaca.html) and its filtered 52K version from [yahma](https://huggingface.co/datasets/yahma/alpaca-cleaned)). You can replace this code section with your own data prep.\n", "\n", "**[NOTE]** To train only on completions (ignoring the user's input), read TRL's docs [here](https://huggingface.co/docs/trl/sft_trainer#train-on-completions-only).\n", "\n", "**[NOTE]** Remember to add the **EOS_TOKEN** to the tokenized output! Otherwise you'll get infinite generations!\n", "\n", "If you want to use the `mistral3` template for ShareGPT datasets, try our conversational [notebook](https://colab.research.google.com/drive/1XamvWYinY6FOSX9GLvnqSjjsNflxdhNc?usp=sharing).\n", "\n", "For text completions like novel writing, try this [notebook](https://colab.research.google.com/drive/1ef-tab5bhkvWmBOObepl1WgJvfvSzn5Q?usp=sharing)." 
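The EOS note above can be sketched in plain Python. This is a minimal stand-alone illustration, not the notebook's actual code: `"<eos>"` is a placeholder for the model's real `tokenizer.eos_token`, and the instruction/input/response strings are made up.

```python
# Minimal sketch of Alpaca-style formatting with a trailing EOS token.
# "<eos>" is a placeholder; the real notebook uses tokenizer.eos_token.
alpaca_prompt = """Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.

### Instruction:
{}

### Input:
{}

### Response:
{}"""

EOS_TOKEN = "<eos>"  # placeholder; Gemma's actual EOS comes from its tokenizer
text = alpaca_prompt.format("Summarize this.", "Some input text.", "A short summary.") + EOS_TOKEN

# Without the trailing EOS the model never learns where a response ends,
# so generation runs on forever at inference time.
print(text.endswith(EOS_TOKEN))  # True
```

The only point that matters for training is the final concatenation: every formatted example must end with the tokenizer's EOS token.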
] }, { "cell_type": "code", "execution_count": null, "metadata": { "colab": { "base_uri": "https://localhost:8080/", "height": 177, "referenced_widgets": [ "3fddd29878ba408db098fb05db157710", "b542e0276854449d9ec4bed67279d037", "c3143dca63c6445fb7aa06d7d764d7a9", "e5c89397eb1a42bb895a4c540db2df1c", "ae243226e393499bac22c08d2b3d9570", "265fecfcffb44db580c066a04b5ea37b", "81b34a02e519484c965524acbe252807", "9179bf6bb49f477b9d9e5eb2f8015aaa", "d6dff4e305aa46fbaa9375133356378a", "e2844ec735d4420391b8ed1b9a932949", "d1ebb90d4a8e4656941f47d644013204", "902ea19651c546de8c19414daf6a053a", "4e40665bc93a414c87c089a3a0bb4008", "854179dbb6854ac5bb3d7240dcc3cb0b", "7b64ff6c0203472391e90ce30ed4165f", "17c55c44d870452c81d902b74c8cce79", "2b0e9c589f9848f2aeec3a97dacf2dc5", "a62221e4825a48caa9ef7f906fd43748", "f9ee25b240f74c21adfa24ce54659efd", "5b795861641e488fb6a47f88860a9ccd", "1f6d972a105a46438f51566f5a24cf84", "7c86fc2b3b0d4708bc2e55801894e37e", "f8a68ea30ebd4251931cf4d7b5be62a9", "f2f000db73f34f468b1c549c8422743a", "1ce1838f9eb34f74a615ad82cab78274", "0afe6c4f57a643159bc51aa36f099f61", "cc0bc8033830406a942b67c4cbbc5d28", "0bf1d81abe6f4a3493c29810857fc8dd", "f31165c434c7427fb4d26ea2af0feda9", "0a4506749df0400480090d3127285ed6", "641f767ab42e4190bdd3d0abfe851301", "e0211c2ff4fb46aaa85bf681c004a04c", "a593571b38dd4fd696e3d4778d2a9f03", "8c398b847644448e9d75135f3200f156", "63f16ab5e462454d88927b21df4427aa", "afb69e965b33472a8de2739c1cdff1e9", "5455311519604e9993d537555f372a0b", "a398a05038394c7b853237d751a0bbdd", "68b2f241810746b7973e2b94ba4c0122", "e4fd6646d36e4bce817e1e28dd99dc51", "c6ede16c623b49c7b4916e5ce4799125", "9a6fe19da592481bbe762912bc45bbed", "9439a307ffdd4705b7a1affb46d0fb71", "6a62779572c6495fb2594270742b6e58", "871336e6e4134fb28bd3b2fa606059cc", "0a8ca1638fe248d0856fd4c385f9a70b", "93a7d487abb0476383c7e57a3da1f851", "372011973a8e46a2888bf4299b042aa0", "2d5c3406ed7e4e03af04711751debd71", "afc36d622583404b942709b58027ffd2", "8b74367efcea4a50be0c0b205dc1dd47", 
"97d31d1c17d248f3b2dffe59143a9797", "c7fbd851c32746d3a2a0e69b411b2121", "cc0cb8826ef3428389ed6dfff6717d95", "57f60ec03bf14970b020302f317ea97a" ] }, "id": "LjY75GoYUCB8", "outputId": "062127be-de34-40cb-b112-d799a1873d64" }, "outputs": [ { "data": { "application/vnd.jupyter.widget-view+json": { "model_id": "3fddd29878ba408db098fb05db157710", "version_major": 2, "version_minor": 0 }, "text/plain": [ "README.md: 0%| | 0.00/450 [00:00, ?B/s]" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "application/vnd.jupyter.widget-view+json": { "model_id": "902ea19651c546de8c19414daf6a053a", "version_major": 2, "version_minor": 0 }, "text/plain": [ "train-00000-of-00002.parquet: 0%| | 0.00/158M [00:00, ?B/s]" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "application/vnd.jupyter.widget-view+json": { "model_id": "f8a68ea30ebd4251931cf4d7b5be62a9", "version_major": 2, "version_minor": 0 }, "text/plain": [ "train-00001-of-00002.parquet: 0%| | 0.00/144M [00:00, ?B/s]" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "application/vnd.jupyter.widget-view+json": { "model_id": "8c398b847644448e9d75135f3200f156", "version_major": 2, "version_minor": 0 }, "text/plain": [ "Generating train split: 0%| | 0/172026 [00:00, ? examples/s]" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "application/vnd.jupyter.widget-view+json": { "model_id": "871336e6e4134fb28bd3b2fa606059cc", "version_major": 2, "version_minor": 0 }, "text/plain": [ "Map: 0%| | 0/172026 [00:00, ? examples/s]" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "alpaca_prompt = \"\"\"Below is an instruction that describes a task, paired with an input that provides further context. 
Write a response that appropriately completes the request.\n", "\n", "### Instruction:\n", "{}\n", "\n", "### Input:\n", "{}\n", "\n", "### Response:\n", "{}\"\"\"\n", "\n", "EOS_TOKEN = tokenizer.eos_token # Must add EOS_TOKEN\n", "def formatting_prompts_func(examples):\n", " instructions = examples[\"instruction\"]\n", " inputs = examples[\"input\"]\n", " outputs = examples[\"output\"]\n", " texts = []\n", " for instruction, input, output in zip(instructions, inputs, outputs):\n", " # Must add EOS_TOKEN, otherwise your generation will go on forever!\n", " text = alpaca_prompt.format(instruction, input, output) + EOS_TOKEN\n", " texts.append(text)\n", " return { \"text\" : texts, }\n", "pass\n", "\n", "from datasets import load_dataset\n", "dataset = load_dataset(\"BanglaLLM/bangla-alpaca-orca\", split = \"train\")\n", "dataset = dataset.map(formatting_prompts_func, batched = True,)" ] }, { "cell_type": "markdown", "metadata": { "id": "idAEIeSQ3xdS" }, "source": [ "\n", "### Train the model\n", "Now let's use Huggingface TRL's `SFTTrainer`! More docs here: [TRL SFT docs](https://huggingface.co/docs/trl/sft_trainer). We do 60 steps to speed things up, but you can set `num_train_epochs=1` for a full run, and turn off `max_steps=None`. We also support TRL's `DPOTrainer`!" 
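To see why 200 steps is only a quick demo, here is a back-of-envelope check of the trainer settings used below (pure arithmetic, no GPU or Unsloth install needed); the dataset size of 172,026 examples is the one reported in the cell outputs.

```python
# Sanity-check the effective batch size and epoch length for the settings
# used in this notebook's SFTTrainer configuration.
per_device_batch = 1      # per_device_train_batch_size
grad_accum = 4            # gradient_accumulation_steps
max_steps = 200
num_examples = 172_026    # train split size of BanglaLLM/bangla-alpaca-orca

effective_batch = per_device_batch * grad_accum    # examples per optimizer step
examples_seen = effective_batch * max_steps        # examples touched in this run
steps_per_epoch = num_examples // effective_batch  # steps needed for one epoch

print(effective_batch, examples_seen, steps_per_epoch)  # 4 800 43006
```

So this demo run sees only 800 of the 172,026 examples (well under 1% of an epoch); a full epoch at the same batch size would take roughly 43,000 steps.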
] }, { "cell_type": "code", "execution_count": null, "metadata": { "colab": { "base_uri": "https://localhost:8080/", "height": 66, "referenced_widgets": [ "dd252c6d1d59418aa0f5b7c469351dee", "1949ce4af8c94c0ba7e3ac9d8df6b332", "e096cf56562a4e7281681be173d51b09", "89202b77af2a47b196cc8723c846e891", "3cd94a9a96894e51a652076762478155", "bac3b0bec13d492b86a5c65a0bb5b96f", "8adaf5cc36a3456094e077eca79c8b7e", "98cc2108542f443eb242a45fc671afef", "90620de3bb6a467d92b92622e5dfb0c5", "6f7ed19f4b77411c88223d59fa50d13a", "9d319b570bd64f0e9176817d577bc020" ] }, "id": "95_Nn-89DhsL", "outputId": "d644b905-6f99-42e2-8539-1bf9173a04bd" }, "outputs": [ { "data": { "application/vnd.jupyter.widget-view+json": { "model_id": "dd252c6d1d59418aa0f5b7c469351dee", "version_major": 2, "version_minor": 0 }, "text/plain": [ "Map (num_proc=2): 0%| | 0/172026 [00:00, ? examples/s]" ] }, "metadata": {}, "output_type": "display_data" }, { "name": "stderr", "output_type": "stream", "text": [ "max_steps is given, it will override any value given in num_train_epochs\n" ] } ], "source": [ "from trl import SFTTrainer\n", "from transformers import TrainingArguments\n", "from unsloth import is_bfloat16_supported\n", "\n", "trainer = SFTTrainer(\n", " model = model,\n", " tokenizer = tokenizer,\n", " train_dataset = dataset,\n", " dataset_text_field = \"text\",\n", " max_seq_length = max_seq_length,\n", " dataset_num_proc = 2,\n", " packing = False, # Can make training 5x faster for short sequences.\n", " args = TrainingArguments(\n", " per_device_train_batch_size = 1,\n", " gradient_accumulation_steps = 4,\n", " warmup_steps = 5,\n", " # num_train_epochs = 1, # Set this for 1 full training run.\n", " max_steps = 200,\n", " learning_rate = 2e-4,\n", " fp16 = not is_bfloat16_supported(),\n", " bf16 = is_bfloat16_supported(),\n", " logging_steps = 1,\n", " optim = \"adamw_8bit\",\n", " weight_decay = 0.01,\n", " lr_scheduler_type = \"linear\",\n", " seed = 3407,\n", " output_dir = \"outputs\",\n", " 
report_to = \"none\", # Use this for WandB etc\n", " ),\n", ")" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "cellView": "form", "colab": { "base_uri": "https://localhost:8080/" }, "id": "2ejIt2xSNKKp", "outputId": "d558ea2b-76a1-46ba-b01a-3206deae32f4" }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "GPU = Tesla T4. Max memory = 14.748 GB.\n", "6.576 GB of memory reserved.\n" ] } ], "source": [ "#@title Show current memory stats\n", "gpu_stats = torch.cuda.get_device_properties(0)\n", "start_gpu_memory = round(torch.cuda.max_memory_reserved() / 1024 / 1024 / 1024, 3)\n", "max_memory = round(gpu_stats.total_memory / 1024 / 1024 / 1024, 3)\n", "print(f\"GPU = {gpu_stats.name}. Max memory = {max_memory} GB.\")\n", "print(f\"{start_gpu_memory} GB of memory reserved.\")" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "colab": { "base_uri": "https://localhost:8080/", "height": 1000 }, "id": "yqxqAZ7KJ4oL", "outputId": "7a7a385a-b90b-4e69-83b7-0f19d2f978ee" }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "**** Unsloth: Please use our fixed gradient_accumulation_steps by updating transformers, TRL and Unsloth!\n", "`pip install --upgrade --no-cache-dir unsloth git+https://github.com/huggingface/transformers.git git+https://github.com/huggingface/trl.git`\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "==((====))== Unsloth - 2x faster free finetuning | Num GPUs = 1\n", " \\\\ /| Num examples = 172,026 | Num Epochs = 1\n", "O^O/ \\_/ \\ Batch size per device = 1 | Gradient Accumulation steps = 4\n", "\\ / Total batch size = 4 | Total steps = 200\n", " \"-____-\" Number of trainable parameters = 54,018,048\n" ] }, { "data": { "text/html": [ "\n", "Step | \n", "Training Loss | \n", "
---|---|
1 | \n", "1.701200 | \n", "
2 | \n", "1.496500 | \n", "
3 | \n", "1.836600 | \n", "
4 | \n", "1.300100 | \n", "
5 | \n", "1.385700 | \n", "
6 | \n", "1.406400 | \n", "
7 | \n", "1.361900 | \n", "
8 | \n", "1.255600 | \n", "
9 | \n", "1.073400 | \n", "
10 | \n", "1.055400 | \n", "
11 | \n", "0.924000 | \n", "
12 | \n", "0.660100 | \n", "
13 | \n", "1.054300 | \n", "
14 | \n", "0.642100 | \n", "
15 | \n", "1.163400 | \n", "
16 | \n", "1.049700 | \n", "
17 | \n", "1.200700 | \n", "
18 | \n", "0.638300 | \n", "
19 | \n", "0.920000 | \n", "
20 | \n", "0.508700 | \n", "
21 | \n", "1.129800 | \n", "
22 | \n", "0.805900 | \n", "
23 | \n", "0.588500 | \n", "
24 | \n", "0.876600 | \n", "
25 | \n", "0.920100 | \n", "
26 | \n", "1.080800 | \n", "
27 | \n", "1.081600 | \n", "
28 | \n", "0.944700 | \n", "
29 | \n", "0.940600 | \n", "
30 | \n", "0.942900 | \n", "
31 | \n", "0.718600 | \n", "
32 | \n", "0.577500 | \n", "
33 | \n", "0.764700 | \n", "
34 | \n", "1.111100 | \n", "
35 | \n", "1.084600 | \n", "
36 | \n", "0.978700 | \n", "
37 | \n", "0.765000 | \n", "
38 | \n", "0.895000 | \n", "
39 | \n", "0.792800 | \n", "
40 | \n", "0.727800 | \n", "
41 | \n", "0.849400 | \n", "
42 | \n", "0.775200 | \n", "
43 | \n", "0.710300 | \n", "
44 | \n", "1.014700 | \n", "
45 | \n", "1.042400 | \n", "
46 | \n", "1.225500 | \n", "
47 | \n", "0.571200 | \n", "
48 | \n", "1.098000 | \n", "
49 | \n", "0.872600 | \n", "
50 | \n", "0.741700 | \n", "
51 | \n", "0.979600 | \n", "
52 | \n", "0.999200 | \n", "
53 | \n", "0.556200 | \n", "
54 | \n", "0.660700 | \n", "
55 | \n", "0.784900 | \n", "
56 | \n", "0.940400 | \n", "
57 | \n", "0.701900 | \n", "
58 | \n", "0.968700 | \n", "
59 | \n", "0.682900 | \n", "
60 | \n", "0.840300 | \n", "
61 | \n", "0.526800 | \n", "
62 | \n", "0.961600 | \n", "
63 | \n", "0.754700 | \n", "
64 | \n", "1.092100 | \n", "
65 | \n", "0.929000 | \n", "
66 | \n", "0.804800 | \n", "
67 | \n", "1.272900 | \n", "
68 | \n", "1.062800 | \n", "
69 | \n", "1.383400 | \n", "
70 | \n", "1.233700 | \n", "
71 | \n", "1.016000 | \n", "
72 | \n", "0.744300 | \n", "
73 | \n", "0.800700 | \n", "
74 | \n", "1.008500 | \n", "
75 | \n", "0.906300 | \n", "
76 | \n", "0.766700 | \n", "
77 | \n", "1.090200 | \n", "
78 | \n", "0.807400 | \n", "
79 | \n", "0.550700 | \n", "
80 | \n", "0.553800 | \n", "
81 | \n", "0.999900 | \n", "
82 | \n", "1.292100 | \n", "
83 | \n", "1.061900 | \n", "
84 | \n", "1.047400 | \n", "
85 | \n", "0.734200 | \n", "
86 | \n", "0.391800 | \n", "
87 | \n", "0.702700 | \n", "
88 | \n", "0.687700 | \n", "
89 | \n", "0.822200 | \n", "
90 | \n", "0.705000 | \n", "
91 | \n", "0.763900 | \n", "
92 | \n", "0.236300 | \n", "
93 | \n", "0.749500 | \n", "
94 | \n", "0.445200 | \n", "
95 | \n", "0.500800 | \n", "
96 | \n", "0.877400 | \n", "
97 | \n", "0.884400 | \n", "
98 | \n", "0.887000 | \n", "
99 | \n", "0.889900 | \n", "
100 | \n", "0.895900 | \n", "
101 | \n", "1.042100 | \n", "
102 | \n", "1.052900 | \n", "
103 | \n", "0.953700 | \n", "
104 | \n", "0.752700 | \n", "
105 | \n", "0.921000 | \n", "
106 | \n", "0.897100 | \n", "
107 | \n", "0.784500 | \n", "
108 | \n", "0.712600 | \n", "
109 | \n", "0.716700 | \n", "
110 | \n", "1.199900 | \n", "
111 | \n", "0.844600 | \n", "
112 | \n", "0.810800 | \n", "
113 | \n", "0.704900 | \n", "
114 | \n", "1.119300 | \n", "
115 | \n", "0.408600 | \n", "
116 | \n", "0.431300 | \n", "
117 | \n", "1.093200 | \n", "
118 | \n", "0.649600 | \n", "
119 | \n", "0.685300 | \n", "
120 | \n", "1.326500 | \n", "
121 | \n", "0.722300 | \n", "
122 | \n", "0.580700 | \n", "
123 | \n", "0.890100 | \n", "
124 | \n", "0.722200 | \n", "
125 | \n", "0.901900 | \n", "
126 | \n", "0.383200 | \n", "
127 | \n", "0.765700 | \n", "
128 | \n", "1.099800 | \n", "
129 | \n", "1.230900 | \n", "
130 | \n", "1.045700 | \n", "
131 | \n", "0.643400 | \n", "
132 | \n", "1.044200 | \n", "
133 | \n", "0.984500 | \n", "
134 | \n", "1.070600 | \n", "
135 | \n", "1.073700 | \n", "
136 | \n", "0.388500 | \n", "
137 | \n", "0.962500 | \n", "
138 | \n", "1.048300 | \n", "
139 | \n", "0.661400 | \n", "
140 | \n", "0.906000 | \n", "
141 | \n", "0.725700 | \n", "
142 | \n", "0.888300 | \n", "
143 | \n", "0.254600 | \n", "
144 | \n", "0.824500 | \n", "
145 | \n", "0.814300 | \n", "
146 | \n", "0.965900 | \n", "
147 | \n", "0.719700 | \n", "
148 | \n", "1.137200 | \n", "
149 | \n", "0.745100 | \n", "
150 | \n", "0.972400 | \n", "
151 | \n", "0.530900 | \n", "
152 | \n", "0.816800 | \n", "
153 | \n", "0.740300 | \n", "
154 | \n", "0.808000 | \n", "
155 | \n", "1.164000 | \n", "
156 | \n", "0.523100 | \n", "
157 | \n", "1.065800 | \n", "
158 | \n", "1.191600 | \n", "
159 | \n", "0.865600 | \n", "
160 | \n", "0.839400 | \n", "
161 | \n", "0.975000 | \n", "
162 | \n", "0.614300 | \n", "
163 | \n", "1.052100 | \n", "
164 | \n", "0.889800 | \n", "
165 | \n", "0.402000 | \n", "
166 | \n", "0.633400 | \n", "
167 | \n", "0.800300 | \n", "
168 | \n", "0.973800 | \n", "
169 | \n", "0.466100 | \n", "
170 | \n", "0.877100 | \n", "
171 | \n", "0.752700 | \n", "
172 | \n", "1.166300 | \n", "
173 | \n", "0.919500 | \n", "
174 | \n", "0.701400 | \n", "
175 | \n", "0.902800 | \n", "
176 | \n", "0.895900 | \n", "
177 | \n", "0.808900 | \n", "
178 | \n", "0.631700 | \n", "
179 | \n", "0.588300 | \n", "
180 | \n", "0.901700 | \n", "
181 | \n", "1.015800 | \n", "
182 | \n", "0.893900 | \n", "
183 | \n", "0.726100 | \n", "
184 | \n", "0.814900 | \n", "
185 | \n", "0.589000 | \n", "
186 | \n", "0.728600 | \n", "
187 | \n", "0.884300 | \n", "
188 | \n", "0.791000 | \n", "
189 | \n", "0.917300 | \n", "
190 | \n", "0.954500 | \n", "
191 | \n", "1.196100 | \n", "
192 | \n", "0.870400 | \n", "
193 | \n", "0.949800 | \n", "
194 | \n", "0.982200 | \n", "
195 | \n", "0.965000 | \n", "
196 | \n", "1.317000 | \n", "
197 | \n", "0.497100 | \n", "
198 | \n", "0.655100 | \n", "
199 | \n", "1.060100 | \n", "
200 | \n", "0.994400 | \n", "
"
],
"text/plain": [
"