Fine-tuning in progress (step 868/1000, roughly 3 epochs).

A Hungarian-language instruction-following model, LoRA fine-tuned on the szurkemarha corpus (a combination of several Hungarian corpora).

Training repo used: https://github.com/OpenMOSE/RWKV5-LM-LoRA

For inference I use: https://github.com/RWKV/rwkv.cpp
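
Once the LoRA has been merged back into the base checkpoint (see the sketch at the end of this card) and the result has been converted with rwkv.cpp's conversion script, a minimal generation loop could look like the sketch below. This is only a sketch, assuming the Python bindings bundled with rwkv.cpp (`rwkv_cpp_model`, `rwkv_cpp_shared_library`, `sampling`); the model path and prompt token IDs are placeholders, and the exact module layout varies between versions of the repo, which also ships ready-made `generate_completions.py` and `chat_with_bot.py` scripts under `python/`.

```python
# Minimal sketch, assuming it is run from rwkv.cpp's python/ directory
# (in newer checkouts the binding modules live inside the rwkv_cpp package).
# MODEL_PATH and the prompt token IDs below are placeholders.
import sampling                    # rwkv.cpp helper for temperature/top-p sampling
import rwkv_cpp_model              # Python wrapper around the librwkv shared library
import rwkv_cpp_shared_library

MODEL_PATH = 'szurkemarha-rwkv5-3b.bin'   # hypothetical file produced by convert_pytorch_to_ggml.py

library = rwkv_cpp_shared_library.load_rwkv_shared_library()
model = rwkv_cpp_model.RWKVModel(library, MODEL_PATH)

# Placeholder token IDs -- in practice, encode the prompt with the RWKV World
# tokenizer bundled with rwkv.cpp (the helper name differs between versions).
prompt_tokens = [510, 4705]

# Feed the whole prompt once, then sample token by token.
logits, state = model.eval_sequence(prompt_tokens, None)

generated = []
for _ in range(64):
    token = sampling.sample_logits(logits, temperature=0.8, top_p=0.5)
    generated.append(token)
    logits, state = model.eval(token, state)

print(generated)   # decode with the same tokenizer to get text back
model.free()
```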

Fine-tuning options (essentially the defaults):

    {'load_model': 'model/RWKV-5-World-3B-v2-20231113-ctx4096.pth', 'wandb': '', 'proj_dir': 'output', 'random_seed': -1,
    'data_file': 'data', 'data_type': 'binidx', 'vocab_size': 65536, 'ctx_len': 4096,
    'epoch_steps': 200, 'epoch_count': 1000, 'epoch_begin': 0, 'epoch_save': 2, 'micro_bsz': 1,
    'n_layer': 32, 'n_embd': 2560, 'dim_att': 2560, 'dim_ffn': 8960, 'pre_ffn': 0, 'head_qk': 0,
    'tiny_att_dim': 0, 'tiny_att_layer': -999,
    'lr_init': 0.0001, 'lr_final': 9e-06, 'warmup_steps': 200, 'beta1': 0.9, 'beta2': 0.999, 'adam_eps': 1e-08,
    'grad_cp': 1, 'dropout': 0, 'weight_decay': 0, 'weight_decay_final': -1,
    'my_pile_version': 1, 'my_pile_stage': 0, 'my_pile_shift': -1, 'my_pile_edecay': 0, 'layerwise_lr': 1, 'ds_bucket_mb': 200,
    'my_sample_len': 0, 'my_ffn_shift': 1, 'my_att_shift': 1, 'head_size_a': 64, 'head_size_divisor': 8, 'my_pos_emb': 0,
    'load_partial': 0, 'magic_prime': 0, 'my_qa_mask': 0, 'my_random_steps': 0, 'my_testing': 'r2r3r4', 'my_exit': 99999999, 'my_exit_tokens': 0,
    'lora': True, 'lora_load': '', 'lora_r': 8, 'lora_alpha': 16.0, 'lora_dropout': 0.01, 'lora_parts': 'att,ffn,time,ln',
    'logger': False, 'enable_checkpointing': False, 'default_root_dir': None, 'gradient_clip_val': 1.0, 'gradient_clip_algorithm': None,
    'num_nodes': 1, 'num_processes': None, 'devices': '1', 'gpus': None, 'auto_select_gpus': False, 'tpu_cores': None, 'ipus': None,
    'enable_progress_bar': True, 'overfit_batches': 0.0, 'track_grad_norm': -1, 'check_val_every_n_epoch': 100000000000000000000, 'fast_dev_run': False,
    'accumulate_grad_batches': None, 'max_epochs': -1, 'min_epochs': None, 'max_steps': -1, 'min_steps': None, 'max_time': None,
    'limit_train_batches': None, 'limit_val_batches': None, 'limit_test_batches': None, 'limit_predict_batches': None, 'val_check_interval': None,
    'log_every_n_steps': 100000000000000000000, 'accelerator': 'gpu', 'strategy': 'deepspeed_stage_2_offload', 'sync_batchnorm': False, 'precision': 'bf16',
    'enable_model_summary': True, 'weights_save_path': None, 'num_sanity_val_steps': 0, 'resume_from_checkpoint': None, 'profiler': None,
    'benchmark': None, 'deterministic': None, 'reload_dataloaders_every_n_epochs': 0, 'auto_lr_find': False, 'replace_sampler_ddp': False,
    'detect_anomaly': False, 'auto_scale_batch_size': False, 'plugins': None, 'amp_backend': 'native', 'amp_level': None,
    'move_metrics_to_cpu': False, 'multiple_trainloader_mode': 'max_size_cycle',
    'my_timestamp': '2024-03-25-21-56-16', 'betas': (0.9, 0.999), 'real_bsz': 1, 'run_name': '65536 ctx4096 L32 D2560'}

See the train_log file for more details.
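
As far as I know, rwkv.cpp loads a single merged `.pth` checkpoint rather than a separate LoRA adapter, so the adapter trained with the settings above (`lora_r: 8`, `lora_alpha: 16.0`) has to be folded back into the base weights before conversion. RWKV5-LM-LoRA ships its own merge utility for this; the sketch below only illustrates the underlying math, and the checkpoint file names and the `lora_A`/`lora_B` key pattern are assumptions, not the repo's actual layout.

```python
# Illustrative LoRA merge sketch: W' = W + (alpha / r) * B @ A.
# File names and the '<param>.lora_A' / '<param>.lora_B' key pattern are
# hypothetical; use the merge script shipped with RWKV5-LM-LoRA for real runs.
import torch

LORA_R, LORA_ALPHA = 8, 16.0            # values from the config above
SCALE = LORA_ALPHA / LORA_R

base = torch.load('model/RWKV-5-World-3B-v2-20231113-ctx4096.pth', map_location='cpu')
lora = torch.load('output/rwkv-lora.pth', map_location='cpu')  # hypothetical LoRA checkpoint name

for name, weight in base.items():
    a_key, b_key = f'{name}.lora_A', f'{name}.lora_B'          # assumed key naming
    if a_key in lora and b_key in lora:
        # (out, r) @ (r, in) -> (out, in), then scale and add to the base weight
        delta = lora[b_key].float() @ lora[a_key].float()
        base[name] = (weight.float() + SCALE * delta).to(weight.dtype)

torch.save(base, 'model/szurkemarha-rwkv5-3b-merged.pth')       # input for rwkv.cpp conversion
```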

