---
library_name: peft
license: apache-2.0
base_model: unsloth/SmolLM2-1.7B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: dbc8068c-c0df-477e-b01e-54d0253b084c
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>

axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/SmolLM2-1.7B
bf16: auto
chat_template: llama3
cosine_min_lr_ratio: 0.1
data_processes: 4
dataset_prepared_path: null
datasets:
- data_files:
  - 4e8bbccf7ba30338_train_data.json
  ds_type: json
  format: custom
  num_proc: 4
  path: /workspace/input_data/4e8bbccf7ba30338_train_data.json
  streaming: true
  type:
    field_input: Article
    field_instruction: Summary
    field_output: Headline
    format: '{instruction} {input}'
    no_input_format: '{instruction}'
    system_format: '{system}'
    system_prompt: ''
debug: null
deepspeed: null
device_map:
  lm_head: 3
  model.embed_tokens: 0
  model.layers.0: 0
  model.layers.1: 0
  model.layers.10: 3
  model.layers.11: 3
  model.layers.2: 0
  model.layers.3: 1
  model.layers.4: 1
  model.layers.5: 1
  model.layers.6: 2
  model.layers.7: 2
  model.layers.8: 2
  model.layers.9: 3
  model.norm: 3
do_eval: true
early_stopping_patience: 1
eval_batch_size: 1
eval_sample_packing: false
eval_steps: 25
evaluation_strategy: steps
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 32
gradient_checkpointing: true
group_by_length: true
hub_model_id: eeeebbb2/dbc8068c-c0df-477e-b01e-54d0253b084c
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 64
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lora_target_modules:
- q_proj
- v_proj
lr_scheduler: cosine
max_grad_norm: 0.3
max_memory:
  0: 60GB
  1: 70GB
  2: 70GB
  3: 70GB
  cpu: 96GB
max_steps: 50
micro_batch_size: 1
mixed_precision: bf16
mlflow_experiment_name: /tmp/4e8bbccf7ba30338_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optim_args:
  adam_beta1: 0.9
  adam_beta2: 0.95
  adam_epsilon: 1e-5
optimizer: adamw_torch
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 25
save_strategy: steps
sequence_len: 2048
strict: false
tf32: false
tokenizer_type: AutoTokenizer
torch_compile: false
torch_dtype: bfloat16
train_on_inputs: false
trust_remote_code: true
use_cache: false
val_set_size: 50
wandb_entity: null
wandb_mode: online
wandb_name: dbc8068c-c0df-477e-b01e-54d0253b084c
wandb_project: Public_TuningSN
wandb_runid: dbc8068c-c0df-477e-b01e-54d0253b084c
warmup_ratio: 0.05
weight_decay: 0.01
xformers_attention: null

```

</details><br>
					
						
# dbc8068c-c0df-477e-b01e-54d0253b084c

This model is a fine-tuned version of [unsloth/SmolLM2-1.7B](https://huggingface.co/unsloth/SmolLM2-1.7B) on the dataset described in the axolotl config above.
It achieves the following results on the evaluation set:
- Loss: nan
					
						
## Model description

More information needed

## Intended uses & limitations

More information needed
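
The adapter weights are presumably available under the `hub_model_id` from the config above. As a minimal, untested sketch (and keeping in mind that the reported evaluation loss is NaN, so output quality is uncertain), the LoRA adapter can be attached to the base model with PEFT:

```python
# Minimal sketch: load the base model and attach the LoRA adapter with PEFT.
# adapter_id is taken from hub_model_id in the config above; untested.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "unsloth/SmolLM2-1.7B"
adapter_id = "eeeebbb2/dbc8068c-c0df-477e-b01e-54d0253b084c"

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16)
model = PeftModel.from_pretrained(model, adapter_id)

# The training prompt format was '{instruction} {input}' (Summary followed by Article).
prompt = "Example summary text. Example article text."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```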
					
						
## Training and evaluation data

More information needed
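
The `datasets` entry in the config above maps each JSON record to a prompt: the `Summary` field fills the `{instruction}` slot, `Article` fills `{input}`, and `Headline` is the completion the model is trained to produce. A rough sketch of that mapping (the record values below are made up for illustration; axolotl's actual prompt construction and the `llama3` chat template may add further structure):

```python
# Illustrative only: how the configured format strings combine the dataset fields.
# The record is a made-up example; axolotl's real prompt/tokenization path may differ.
record = {
    "Summary": "Example summary used as the instruction.",
    "Article": "Example article body used as the input.",
    "Headline": "Example headline used as the training target.",
}

prompt_format = "{instruction} {input}"  # format
no_input_format = "{instruction}"        # no_input_format

if record.get("Article"):
    prompt = prompt_format.format(instruction=record["Summary"], input=record["Article"])
else:
    prompt = no_input_format.format(instruction=record["Summary"])

target = record["Headline"]  # field_output: what the model learns to generate
print(prompt)
print(target)
```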
					
						
## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 32
- total_train_batch_size: 128
- total_eval_batch_size: 4
- optimizer: adamw_torch (OptimizerNames.ADAMW_TORCH) with betas=(0.9,0.999), epsilon=1e-08, and optimizer_args: adam_beta1=0.9, adam_beta2=0.95, adam_epsilon=1e-5
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 2
- training_steps: 50
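
The total train batch size of 128 is the product of micro_batch_size, gradient_accumulation_steps, and num_devices: 1 × 32 × 4 = 128.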
					
						
### Training results

| Training Loss | Epoch  | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 103461800.0   | 0.0012 | 1    | nan             |
| 0.0           | 0.0306 | 25   | nan             |
| 0.0           | 0.0611 | 50   | nan             |


### Framework versions

- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1