---
license: apache-2.0
library_name: peft
tags:
- finetuned
- multimodal
base_model: mistralai/Mistral-7B-Instruct-v0.1
dataset: sshh12/xclip-videoinstruct-finetune
inference: false
---

These are weights for a version of `mistralai/Mistral-7B-Instruct-v0.1` finetuned for multimodal applications.

### Modalities

* XCLIPVideoModality (use `<video>` in text and provide `videos`, encoded as 10 tokens)

### Usage

GitHub: https://github.com/sshh12/multi_token (includes training scripts and a basic inference server; a minimal weight-loading sketch also follows the dataset example below)

### Dataset

sshh12/xclip-videoinstruct-finetune (100010 examples)

```
{'videos': ['https://www.youtube.com/watch?v=k_ZXmr8pmrs'], 'messages': [{'content': 'What are the main activities that take place in the video?', 'role': 'user'}, {'content': 'The main activities that take place in the video are the preparation of camera equipment by a man, a group of men riding a helicopter, and a man sailing a boat through the water.', 'role': 'assistant'}]}
```
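Each record pairs a list of video URLs with a chat-style message list, as shown above. A minimal sketch of peeking at the data with the `datasets` library (the `train` split name is an assumption; check the dataset page for the actual splits):

```python
from datasets import load_dataset

# Stream the dataset rather than downloading it; the "train" split name is an
# assumption -- adjust to the split(s) actually published.
ds = load_dataset("sshh12/xclip-videoinstruct-finetune", split="train", streaming=True)

for i, example in enumerate(ds):
    print(example["videos"])  # list of video URLs
    for message in example["messages"]:
        print(f"{message['role']}: {message['content'][:80]}")
    if i == 1:  # peek at just two examples
        break
```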
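These weights are a PEFT (LoRA) adapter. The multimodal pieces (`MistralLMMForCausalLM`, the video projector, and the `<video>` token handling) live in the multi_token repo linked under Usage, so that repo's scripts and inference server are the supported path for video inputs. For orientation only, a rough sketch of attaching just the LoRA weights to the stock language model with `transformers` + `peft`; this does not load the projector, so it yields text-only behavior, and the adapter repo id is a placeholder:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

BASE = "mistralai/Mistral-7B-Instruct-v0.1"
ADAPTER = "<this-model-repo-id>"  # placeholder: the repo these weights live in

tokenizer = AutoTokenizer.from_pretrained(BASE)
base_model = AutoModelForCausalLM.from_pretrained(BASE, torch_dtype=torch.float16)

# Attaches the LoRA matrices (q/k/v/o and MLP projections, r=64 per the module
# dump below); the video projector is NOT loaded by this path.
model = PeftModel.from_pretrained(base_model, ADAPTER)
```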
### Training Device(s)

```
name, pci.bus_id, vbios_version
NVIDIA RTX A6000, 00000000:82:00.0, 94.02.5C.00.02
```

### Model

```
MistralLMMForCausalLM.model = PeftModelForCausalLM(
  (base_model): LoraModel(
    (model): MistralLMMForCausalLM(
      (model): MistralLMMModel(
        (embed_tokens): Embedding(32000, 4096)
        (layers): ModuleList(
          (0-31): 32 x MistralDecoderLayer(
            (self_attn): MistralAttention(
              (q_proj): lora.Linear(
                (base_layer): Linear(in_features=4096, out_features=4096, bias=False)
                (lora_dropout): ModuleDict(
                  (default): Dropout(p=0.05, inplace=False)
                )
                (lora_A): ModuleDict(
                  (default): Linear(in_features=4096, out_features=64, bias=False)
                )
                (lora_B): ModuleDict(
                  (default): Linear(in_features=64, out_features=4096, bias=False)
                )
                (lora_embedding_A): ParameterDict()
                (lora_embedding_B): ParameterDict()
              )
              (k_proj): lora.Linear(
                (base_layer): Linear(in_features=4096, out_features=1024, bias=False)
                (lora_dropout): ModuleDict(
                  (default): Dropout(p=0.05, inplace=False)
                )
                (lora_A): ModuleDict(
                  (default): Linear(in_features=4096, out_features=64, bias=False)
                )
                (lora_B): ModuleDict(
                  (default): Linear(in_features=64, out_features=1024, bias=False)
                )
                (lora_embedding_A): ParameterDict()
                (lora_embedding_B): ParameterDict()
              )
              (v_proj): lora.Linear(
                (base_layer): Linear(in_features=4096, out_features=1024, bias=False)
                (lora_dropout): ModuleDict(
                  (default): Dropout(p=0.05, inplace=False)
                )
                (lora_A): ModuleDict(
                  (default): Linear(in_features=4096, out_features=64, bias=False)
                )
                (lora_B): ModuleDict(
                  (default): Linear(in_features=64, out_features=1024, bias=False)
                )
                (lora_embedding_A): ParameterDict()
                (lora_embedding_B): ParameterDict()
              )
              (o_proj): lora.Linear(
                (base_layer): Linear(in_features=4096, out_features=4096, bias=False)
                (lora_dropout): ModuleDict(
                  (default): Dropout(p=0.05, inplace=False)
                )
                (lora_A): ModuleDict(
                  (default): Linear(in_features=4096, out_features=64, bias=False)
                )
                (lora_B): ModuleDict(
                  (default): Linear(in_features=64, out_features=4096, bias=False)
                )
                (lora_embedding_A): ParameterDict()
                (lora_embedding_B): ParameterDict()
              )
              (rotary_emb): MistralRotaryEmbedding()
            )
            (mlp): MistralMLP(
              (gate_proj): lora.Linear(
                (base_layer): Linear(in_features=4096, out_features=14336, bias=False)
                (lora_dropout): ModuleDict(
                  (default): Dropout(p=0.05, inplace=False)
                )
                (lora_A): ModuleDict(
                  (default): Linear(in_features=4096, out_features=64, bias=False)
                )
                (lora_B): ModuleDict(
                  (default): Linear(in_features=64, out_features=14336, bias=False)
                )
                (lora_embedding_A): ParameterDict()
                (lora_embedding_B): ParameterDict()
              )
              (up_proj): lora.Linear(
                (base_layer): Linear(in_features=4096, out_features=14336, bias=False)
                (lora_dropout): ModuleDict(
                  (default): Dropout(p=0.05, inplace=False)
                )
                (lora_A): ModuleDict(
                  (default): Linear(in_features=4096, out_features=64, bias=False)
                )
                (lora_B): ModuleDict(
                  (default): Linear(in_features=64, out_features=14336, bias=False)
                )
                (lora_embedding_A): ParameterDict()
                (lora_embedding_B): ParameterDict()
              )
              (down_proj): lora.Linear(
                (base_layer): Linear(in_features=14336, out_features=4096, bias=False)
                (lora_dropout): ModuleDict(
                  (default): Dropout(p=0.05, inplace=False)
                )
                (lora_A): ModuleDict(
                  (default): Linear(in_features=14336, out_features=64, bias=False)
                )
                (lora_B): ModuleDict(
                  (default): Linear(in_features=64, out_features=4096, bias=False)
                )
                (lora_embedding_A): ParameterDict()
                (lora_embedding_B): ParameterDict()
              )
              (act_fn): SiLU()
            )
            (input_layernorm): MistralRMSNorm()
            (post_attention_layernorm): MistralRMSNorm()
          )
        )
        (norm): MistralRMSNorm()
        (video_xclip_lmm_projector): _MLPVectorProjector(
          (mlps): ModuleList(
            (0-9): 10 x Sequential(
              (0): Linear(in_features=512, out_features=4096, bias=True)
              (1): GELU(approximate='none')
              (2): Linear(in_features=4096, out_features=4096, bias=True)
            )
          )
        )
      )
      (lm_head): Linear(in_features=4096, out_features=32000, bias=False)
    )
  )
)
```

### Framework versions

- PEFT 0.7.1
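### Projector Sketch

For reference, the `video_xclip_lmm_projector` printed in the dump above maps one 512-dimensional XCLIP video embedding to the 10 soft tokens that stand in for `<video>`. A minimal sketch of that path, assuming the XCLIP checkpoint is `microsoft/xclip-base-patch32` (whose 512-dim video features match the projector's input size; the exact checkpoint name is an assumption) and re-deriving the projector shapes from the module dump:

```python
import torch
import torch.nn as nn
from transformers import XCLIPModel

class MLPVectorProjector(nn.Module):
    """Ten independent 2-layer MLPs, one per output token (per the dump above)."""

    def __init__(self, in_dim: int = 512, out_dim: int = 4096, num_tokens: int = 10):
        super().__init__()
        self.mlps = nn.ModuleList(
            [
                nn.Sequential(
                    nn.Linear(in_dim, out_dim),
                    nn.GELU(),
                    nn.Linear(out_dim, out_dim),
                )
                for _ in range(num_tokens)
            ]
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, 512) -> (batch, 10, 4096) soft-token embeddings
        return torch.stack([mlp(x) for mlp in self.mlps], dim=1)

xclip = XCLIPModel.from_pretrained("microsoft/xclip-base-patch32")  # assumed checkpoint
projector = MLPVectorProjector()  # randomly initialized here; trained weights ship with the adapter

# 8 frames of 224x224 RGB video, normally produced by XCLIPProcessor.
pixel_values = torch.randn(1, 8, 3, 224, 224)
with torch.no_grad():
    video_features = xclip.get_video_features(pixel_values=pixel_values)  # (1, 512)
    video_tokens = projector(video_features)                              # (1, 10, 4096)
print(video_tokens.shape)  # torch.Size([1, 10, 4096])
```

The 10 resulting embeddings are spliced into the language model's input sequence wherever `<video>` appears, which is why the Modalities section describes each video as "encoded as 10 tokens".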