---
license: apache-2.0
datasets:
- adamo1139/Sydney_LLaVA_0610
base_model:
- Qwen/Qwen2-VL-7B-Instruct
tags:
- fluff
- dogos
- cats
- sydney
- bing
- qwen
- vlm
---


<img src="https://cdn-uploads.huggingface.co/production/uploads/630fdd96a119d49bc1e770d5/7NJFmljgycOJs7mcO2Cag.png" width="500" style="float:right">

## Model Description

Qwen 2 VL 7B Sydney - Optimizing Vision Language Models for engagement and positivity.

Have you ever pasted a picture of your dog or cat into a Vision Language Model, only for the model to describe the image without once complimenting the looks of your fluffer? \
Well, this model will use every chance it gets to compliment your adorable sweetheart.

It was trained on around 60,000 samples of synthetic data generated by [NousResearch/Hermes-3-Llama-3.1-8B](https://huggingface.co/NousResearch/Hermes-3-Llama-3.1-8B). The dataset was converted from [liuhaotian/LLaVA-Instruct-150K](https://huggingface.co/datasets/liuhaotian/LLaVA-Instruct-150K) and is available [here](https://huggingface.co/datasets/adamo1139/Sydney_LLaVA_0610).

I am learning how to finetune Qwen 2 VL 7B, and this model is simply the result of a weekend of tinkering.

## Dataset Creation details

I ran Hermes 3 8B locally in Aphrodite-Engine and used a Python script to go through the LLaVA 150K Instruct dataset, sending one request per sample that asked the model to rewrite the JSON sample so that the output is more energetic. I used a 6-shot prompt, with the bad examples coming from a generic LLM and the good examples coming from [FPHam/Llama-3-8B-Sydney](https://huggingface.co/FPHam/Llama-3-8B-Sydney).
After running through about half of the dataset, I noticed an error in one of my examples. After fixing it and tweaking the prompt a bit, generation quality deteriorated and roughly 30% of the responses I got back no longer passed JSON validation, so I settled on using the ~60,000 samples that had already been processed correctly. I then cleaned up the dataset to fix various errors, such as the presence of non-UTF-8 characters.
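
The rewriting loop looked roughly like the sketch below. This is only a minimal illustration, not the exact script: the endpoint address, prompt wording, file names, and helper names are assumptions, the six few-shot examples are omitted, and it assumes Aphrodite-Engine is serving Hermes 3 8B through its OpenAI-compatible API.

```python
# Minimal sketch of the rewriting loop described above (not the exact script used).
# The endpoint URL, prompt wording, and file names are assumptions; the six
# few-shot examples are omitted for brevity.
import json
from openai import OpenAI

# Aphrodite-Engine exposes an OpenAI-compatible API; adjust base_url to your server.
client = OpenAI(base_url="http://localhost:2242/v1", api_key="empty")

FEW_SHOT: list[dict] = []  # six pairs of bland vs. Sydney-style rewrites would go here

def rewrite_sample(sample: dict) -> dict | None:
    """Ask Hermes 3 8B to rewrite one LLaVA sample so the answers are more energetic."""
    messages = [{
        "role": "system",
        "content": "Rewrite the assistant turns of the following JSON sample to be "
                   "more enthusiastic and complimentary. Return valid JSON only.",
    }]
    messages += FEW_SHOT
    messages.append({"role": "user", "content": json.dumps(sample, ensure_ascii=False)})
    reply = client.chat.completions.create(
        model="NousResearch/Hermes-3-Llama-3.1-8B",
        messages=messages,
        temperature=0.7,
    ).choices[0].message.content
    try:
        return json.loads(reply)  # responses that fail JSON validation are dropped
    except json.JSONDecodeError:
        return None

with open("llava_instruct_150k.json") as f:
    samples = json.load(f)

rewritten = [r for r in (rewrite_sample(s) for s in samples) if r is not None]
with open("sydney_llava.json", "w") as f:
    json.dump(rewritten, f, ensure_ascii=False, indent=2)
```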

## Technical details

The model was trained in LLaMA-Factory with Unsloth on a single RTX 3090 Ti, using a context length of 2000, LoRA rank 32, LoRA alpha 32, and a LoRA+ LR ratio of 4. Training took around 11 hours; bitsandbytes quantization was not used. The full training configuration is below.

```yaml
bf16: true
cutoff_len: 2000
dataset: sydney
dataset_dir: data
ddp_timeout: 180000000
do_train: true
finetuning_type: lora
flash_attn: auto
gradient_accumulation_steps: 16
include_num_input_tokens_seen: true
learning_rate: 5.0e-05
logging_steps: 1
lora_alpha: 32
lora_dropout: 0
lora_rank: 32
lora_target: all
loraplus_lr_ratio: 4
lr_scheduler_type: cosine
max_grad_norm: 1.0
max_samples: 160000
model_name_or_path: Qwen/Qwen2-VL-7B-Instruct
num_train_epochs: 1.0
optim: adamw_8bit
output_dir: saves/Qwen2-VL-7B-Instruct/lora/train_2024-10-05-18-44-10-2
packing: true
per_device_train_batch_size: 1
plot_loss: true
preprocessing_num_workers: 16
report_to: none
save_steps: 200
stage: sft
template: qwen2_vl
train_on_prompt: true
use_unsloth: true
warmup_steps: 25
```
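
Note that the `output_dir` above holds a LoRA adapter rather than a full model. One way to fold it into the base weights is sketched below, assuming the adapter is saved in standard PEFT format; the paths are placeholders.

```python
# Sketch of merging the trained LoRA adapter into the base model with PEFT.
# Paths are placeholders; this assumes the adapter in output_dir is in standard PEFT format.
from transformers import Qwen2VLForConditionalGeneration, AutoProcessor
from peft import PeftModel

base = Qwen2VLForConditionalGeneration.from_pretrained(
    "Qwen/Qwen2-VL-7B-Instruct", torch_dtype="auto"
)
model = PeftModel.from_pretrained(base, "saves/Qwen2-VL-7B-Instruct/lora/train_2024-10-05-18-44-10-2")
merged = model.merge_and_unload()  # fold the LoRA deltas into the base weights

merged.save_pretrained("Qwen2-VL-7B-Sydney")
AutoProcessor.from_pretrained("Qwen/Qwen2-VL-7B-Instruct").save_pretrained("Qwen2-VL-7B-Sydney")
```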

Loss drops quickly and then stays essentially flat. I am not sure why; this suggests that some of the hyperparameters may have been set incorrectly, or that the loss behaves differently for vision language models.

![image/png](https://cdn-uploads.huggingface.co/production/uploads/630fdd96a119d49bc1e770d5/QAaqfinhJTf5Qf52oWL65.png)

## Examples of use

Below I compare Qwen 2 VL 7B Sydney with the base Qwen/Qwen2-VL-7B-Instruct.

<div style="display: grid; grid-template-columns: repeat(2, 1fr); gap: 10px; max-width: 2000px; margin: 0 auto;">
  <img src="https://cdn-uploads.huggingface.co/production/uploads/630fdd96a119d49bc1e770d5/9am1yhT8mid0mYaCCTsRo.png" style="width: 100%; height: auto;" alt="Image 1" />
  <img src="https://cdn-uploads.huggingface.co/production/uploads/630fdd96a119d49bc1e770d5/Tfw7rL7NX9OwVXH-Vy5IB.png" style="width: 100%; height: auto;" alt="Image 2" />
  <img src="https://cdn-uploads.huggingface.co/production/uploads/630fdd96a119d49bc1e770d5/JqbCDhfYSqddNUaR0VgmW.png" style="width: 100%; height: auto;" alt="Image 3" />
  <img src="https://cdn-uploads.huggingface.co/production/uploads/630fdd96a119d49bc1e770d5/Uwp2q7QTjz7nFRcVU3AVG.png" style="width: 100%; height: auto;" alt="Image 4" />
</div>
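
To try the model yourself, the sketch below follows the standard Qwen2-VL usage pattern from transformers with the qwen-vl-utils helper package. `MODEL_ID`, the image path, and the question are placeholders to replace with this repository's id and your own inputs.

```python
# Sketch of running the model with transformers, following the usual Qwen2-VL pattern.
# MODEL_ID and the image path are placeholders; substitute this repository's id.
from transformers import Qwen2VLForConditionalGeneration, AutoProcessor
from qwen_vl_utils import process_vision_info

MODEL_ID = "adamo1139/<this-repository>"  # placeholder for this model's repo id

model = Qwen2VLForConditionalGeneration.from_pretrained(MODEL_ID, torch_dtype="auto", device_map="auto")
processor = AutoProcessor.from_pretrained(MODEL_ID)

messages = [{
    "role": "user",
    "content": [
        {"type": "image", "image": "file:///path/to/your/dog.jpg"},
        {"type": "text", "text": "What do you think of my dog?"},
    ],
}]

text = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(text=[text], images=image_inputs, videos=video_inputs,
                   padding=True, return_tensors="pt").to(model.device)

output_ids = model.generate(**inputs, max_new_tokens=256)
trimmed = [out[len(inp):] for inp, out in zip(inputs.input_ids, output_ids)]
print(processor.batch_decode(trimmed, skip_special_tokens=True)[0])
```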