ZinengTang committed
Commit 5df033a · verified · 1 Parent(s): 7fa86d4

Upload LLaVA-LoRA model

Files changed (3)
  1. README.md +34 -0
  2. adapter_config.json +13 -0
  3. adapter_model.safetensors +3 -0
README.md ADDED
@@ -0,0 +1,34 @@
+
+ # LLaVA-LoRA Adapter
+
+ This is a LoRA adapter for the LLaVA model, fine-tuned for spatial description tasks.
+
+ ## Base Model
+ This adapter is trained on top of [llava-hf/llava-1.5-7b-hf](https://huggingface.co/llava-hf/llava-1.5-7b-hf).
+
+ ## Training
+ The model was fine-tuned using LoRA with the following configuration:
+ - Rank: 8
+ - Alpha: 32
+ - Target modules: q_proj, v_proj, k_proj
+ - Dataset: PersReFex validation set
+
+ ## Usage
+
+ ```python
+ import torch
+ from peft import PeftModel
+ from transformers import AutoProcessor, LlavaForConditionalGeneration
+
+ # Load base model
+ base_model = LlavaForConditionalGeneration.from_pretrained(
+     "llava-hf/llava-1.5-7b-hf",
+     torch_dtype=torch.bfloat16
+ )
+ processor = AutoProcessor.from_pretrained("llava-hf/llava-1.5-7b-hf")
+
+ # Load LoRA adapter
+ model = PeftModel.from_pretrained(
+     base_model,
+     "ZinengTang/llava-lora-spatial"
+ )
+ ```
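The README snippet above only loads the adapter. A minimal inference sketch follows; the prompt template, example image path, question, and generation settings are illustrative assumptions and are not part of this repository.

```python
# Illustrative only: the image file, question, and prompt format are assumed, not shipped with this repo.
import torch
from PIL import Image
from peft import PeftModel
from transformers import AutoProcessor, LlavaForConditionalGeneration

# Load base model, processor, and the LoRA adapter as in the README above
base_model = LlavaForConditionalGeneration.from_pretrained(
    "llava-hf/llava-1.5-7b-hf", torch_dtype=torch.bfloat16, device_map="auto"
)
processor = AutoProcessor.from_pretrained("llava-hf/llava-1.5-7b-hf")
model = PeftModel.from_pretrained(base_model, "ZinengTang/llava-lora-spatial")
model.eval()

# LLaVA-1.5 chat-style prompt; "scene.jpg" is a placeholder image file.
prompt = "USER: <image>\nDescribe where the red chair is relative to the table. ASSISTANT:"
image = Image.open("scene.jpg")

# Move inputs to the model device and cast floating-point tensors to the model dtype
inputs = processor(images=image, text=prompt, return_tensors="pt").to(base_model.device, torch.bfloat16)
with torch.no_grad():
    output_ids = model.generate(**inputs, max_new_tokens=64)
print(processor.batch_decode(output_ids, skip_special_tokens=True)[0])
```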
adapter_config.json ADDED
@@ -0,0 +1,13 @@
+ {
+   "base_model_name_or_path": "llava-hf/llava-1.5-7b-hf",
+   "task_type": "CAUSAL_LM",
+   "inference_mode": false,
+   "r": 8,
+   "lora_alpha": 32,
+   "lora_dropout": 0.1,
+   "target_modules": [
+     "q_proj",
+     "v_proj",
+     "k_proj"
+   ]
+ }
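For readers reproducing the fine-tuning setup, the adapter_config.json above maps onto a peft `LoraConfig` roughly as sketched below; this is a reconstruction from the config fields, not the training script actually used for this commit.

```python
from peft import LoraConfig, get_peft_model

# LoRA settings mirroring the fields of adapter_config.json above.
lora_config = LoraConfig(
    r=8,                                            # LoRA rank
    lora_alpha=32,                                  # scaling factor
    lora_dropout=0.1,
    target_modules=["q_proj", "v_proj", "k_proj"],  # attention projections to adapt
    task_type="CAUSAL_LM",
    inference_mode=False,
)

# Applied to the base LLaVA model, this would add trainable low-rank
# adapters only to the listed projection layers:
# peft_model = get_peft_model(base_model, lora_config)
```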
adapter_model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:20c637b9b990a03c8c1b93152fb6bdcd6796cd6bd44628521b0e663708b59ded
+ size 29936104