weizhiwang committed · verified
Commit dcc8dca · Parent: 4a03fda

Update README.md

Files changed (1): README.md (+3 -3)
README.md CHANGED

@@ -12,7 +12,7 @@ language:
 <!-- Provide a quick summary of what the model is/does. -->
 
 A reproduced LLaVA LVLM based on Llama-3-8B LLM backbone. Not an official implementation.
-Please follow my reproduced implementation [LLaVA-Llama-3](https://github.com/Victorwz/LLaVA-Llama-3/) for more details on fine-tuning the LLaVA model with Llama-3 as the foundation LLM.
+Please follow my reproduced implementation [LLaVA-Video-Llama-3](https://github.com/Victorwz/LLaVA-Video-Llama-3/) for more details on fine-tuning the LLaVA model with Llama-3 as the foundation LLM.
 
 ## Updates
 - [5/14/2024] The codebase has been upgraded to llava-next (llava-v1.6). Now it supports the latest llama-3, phi-3, mistral-v0.1-7b models.
@@ -24,7 +24,7 @@ Follows the LLaVA-1.5 pre-train and supervised fine-tuning pipeline. You do not need
 
 Please first install llava via
 ```
-pip install git+https://github.com/Victorwz/LLaVA-Llama-3.git
+pip install git+https://github.com/Victorwz/LLaVA-Video-Llama-3.git
 ```
 
 You can load the model and perform inference as follows:
@@ -76,7 +76,7 @@ In the background, there are two cars parked on the street, one on the left side
 ```
 
 # Fine-Tune LLaVA-Llama-3 on Your Visual Instruction Data
-Please refer to a forked [LLaVA-Llama-3](https://github.com/Victorwz/LLaVA-Llama-3) git repo for fine-tuning data preparation and scripts. The data loading function and fastchat conversation template are changed due to a different tokenizer.
+Please refer to our [LLaVA-Video-Llama-3](https://github.com/Victorwz/LLaVA-Video-Llama-3) git repo for fine-tuning data preparation and scripts. The data loading function and fastchat conversation template are changed due to a different tokenizer.
 
 ## Benchmark Results
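
A note on the second hunk: the snippet promised by "You can load the model and perform inference as follows:" sits outside the changed lines, so it does not appear in this diff. Below is a minimal sketch of what loading the checkpoint and running one query could look like, assuming the fork keeps upstream LLaVA's loader API (`load_pretrained_model`, `process_images`, `tokenizer_image_token`); the model path and image URL are illustrative placeholders, not taken from the README.

```python
# Minimal inference sketch, not the README's exact snippet. Assumes the fork
# keeps upstream LLaVA's API; model path and image URL are placeholders.
from io import BytesIO

import requests
import torch
from PIL import Image

from llava.constants import DEFAULT_IMAGE_TOKEN, IMAGE_TOKEN_INDEX
from llava.mm_utils import get_model_name_from_path, process_images, tokenizer_image_token
from llava.model.builder import load_pretrained_model

model_path = "weizhiwang/LLaVA-Llama-3-8B"  # assumed checkpoint id
tokenizer, model, image_processor, _ = load_pretrained_model(
    model_path=model_path,
    model_base=None,
    model_name=get_model_name_from_path(model_path),
)

# Fetch an example image and preprocess it with the model's image processor.
resp = requests.get("https://example.com/sample.jpg")  # placeholder URL
image = Image.open(BytesIO(resp.content)).convert("RGB")
image_tensor = process_images([image], image_processor, model.config).to(
    model.device, dtype=torch.float16
)

# Bare prompt carrying the image token; the repo's real snippet would wrap
# this in its Llama-3 conversation template first.
prompt = DEFAULT_IMAGE_TOKEN + "\nDescribe the image."
input_ids = (
    tokenizer_image_token(prompt, tokenizer, IMAGE_TOKEN_INDEX, return_tensors="pt")
    .unsqueeze(0)
    .to(model.device)
)

with torch.inference_mode():
    output_ids = model.generate(input_ids, images=image_tensor, max_new_tokens=256)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```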
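
On the last hunk's remark that "the data loading function and fastchat conversation template are changed due to a different tokenizer": Llama-3 uses `<|start_header_id|>`/`<|eot_id|>` chat markers rather than Vicuna-style `USER:`/`ASSISTANT:` separators, so a Vicuna template would produce wrongly tokenized training targets. A hedged illustration of what a Llama-3-style template registration might look like follows; the template name, system message, and registration call are assumptions, not code copied from the repo.

```python
# Hypothetical Llama-3-style conversation template. Separator strings follow
# Meta's published Llama-3 chat format; the name "llama_3" and this
# registration are assumptions, not the repo's actual code.
from llava.conversation import Conversation, SeparatorStyle, conv_templates

conv_templates["llama_3"] = Conversation(
    system=(
        "<|start_header_id|>system<|end_header_id|>\n\n"
        "You are a helpful assistant."
    ),
    roles=(
        "<|start_header_id|>user<|end_header_id|>\n\n",
        "<|start_header_id|>assistant<|end_header_id|>\n\n",
    ),
    messages=[],
    offset=0,
    # MPT-style concatenation (role + message + sep) matches Llama-3's turn
    # structure more closely than the two-separator Vicuna style.
    sep_style=SeparatorStyle.MPT,
    sep="<|eot_id|>",
)
```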