Update README.md
README.md CHANGED
@@ -172,55 +172,13 @@ MODEL_TYPE='qwen2_vl'
 MODEL=AdaptLLM/food-Qwen2-VL-2B-Instruct
 
 # Set the directory for saving model prediction outputs:
-OUTPUT_DIR=./output/AdaMLLM-food-LLaVA-8B_${DOMAIN}
-
-# Run inference with data parallelism; adjust CUDA devices as needed:
-CUDA_VISIBLE_DEVICES='0,1,2,3,4,5,6,7' bash run_inference.sh ${MODEL} ${DOMAIN} ${MODEL_TYPE} ${OUTPUT_DIR} ${RESULTS_DIR}
-```
-
-Detailed scripts to reproduce our results:
-
-<details>
-<summary> Click to expand </summary>
-
-```bash
-# Choose from ['food', 'Recipe1M', 'Nutrition5K', 'Food101', 'FoodSeg103']
-# 'food' runs inference on all food tasks; others run on a single task
-DOMAIN='food'
-
-# 1. LLaVA-v1.6-8B
-MODEL_TYPE='llava'
-MODEL=AdaptLLM/food-LLaVA-NeXT-Llama3-8B # HuggingFace repo ID for AdaMLLM-food-8B
-OUTPUT_DIR=./output/AdaMLLM-food-LLaVA-8B_${DOMAIN}
-
-CUDA_VISIBLE_DEVICES='0,1,2,3,4,5,6,7' bash run_inference.sh ${MODEL} ${DOMAIN} ${MODEL_TYPE} ${OUTPUT_DIR} ${RESULTS_DIR}
-
-# 2. Qwen2-VL-2B
-MODEL_TYPE='qwen2_vl'
-MODEL=Qwen/Qwen2-VL-2B-Instruct # HuggingFace repo ID for Qwen2-VL
-OUTPUT_DIR=./output/Qwen2-VL-2B-Instruct_${DOMAIN}
-
-CUDA_VISIBLE_DEVICES='0,1,2,3,4,5,6,7' bash run_inference.sh ${MODEL} ${DOMAIN} ${MODEL_TYPE} ${OUTPUT_DIR} ${RESULTS_DIR}
-
-MODEL=AdaptLLM/food-Qwen2-VL-2B-Instruct # HuggingFace repo ID for AdaMLLM-food-2B
 OUTPUT_DIR=./output/AdaMLLM-food-Qwen-2B_${DOMAIN}
 
-
-
-# 3. Llama-3.2-11B
-MODEL_TYPE='mllama'
-MODEL=meta-llama/Llama-3.2-11B-Vision-Instruct # HuggingFace repo ID for Llama3.2
-OUTPUT_DIR=./output/Llama-3.2-11B-Vision-Instruct_${DOMAIN}
-
-CUDA_VISIBLE_DEVICES='0,1,2,3,4,5,6,7' bash run_inference.sh ${MODEL} ${DOMAIN} ${MODEL_TYPE} ${OUTPUT_DIR} ${RESULTS_DIR}
-
-MODEL=AdaptLLM/food-Llama-3.2-11B-Vision-Instruct # HuggingFace repo ID for AdaMLLM-food-11B
-OUTPUT_DIR=./output/AdaMLLM-food-Llama3.2-2B_${DOMAIN}
-
+# Run inference with data parallelism; adjust CUDA devices as needed:
 CUDA_VISIBLE_DEVICES='0,1,2,3,4,5,6,7' bash run_inference.sh ${MODEL} ${DOMAIN} ${MODEL_TYPE} ${OUTPUT_DIR} ${RESULTS_DIR}
 ```
-</details>
 
+Detailed scripts to reproduce our results are in [Evaluation.md](https://github.com/bigai-ai/QA-Synthesizer/blob/main/docs/Evaluation.md)
 
 ### 3) Results
 The evaluation results are stored in `./eval_results`, and the model prediction outputs are in `./output`.
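
The deleted `<details>` block ran the same command once per model with hand-edited variables. If you want that behavior back without following the link to Evaluation.md, the five runs can be driven by one loop. The `MODEL_TYPE` values, HuggingFace repo IDs, and output tags below are copied from the removed lines; the loop structure itself is only a sketch, and it assumes `RESULTS_DIR` is already set, as in the hunk above. The last output tag appears as `AdaMLLM-food-Llama3.2-2B` in the removed lines, which looks like a typo for the 11B model, so the sketch uses `11B`.

```bash
# Sketch: replay the deleted per-model scripts with a single loop.
# Each entry is MODEL_TYPE|HF repo ID|output tag, taken from the
# removed README lines; the loop itself is not from the repo.
DOMAIN='food'  # or one of: Recipe1M, Nutrition5K, Food101, FoodSeg103

RUNS=(
  'llava|AdaptLLM/food-LLaVA-NeXT-Llama3-8B|AdaMLLM-food-LLaVA-8B'
  'qwen2_vl|Qwen/Qwen2-VL-2B-Instruct|Qwen2-VL-2B-Instruct'
  'qwen2_vl|AdaptLLM/food-Qwen2-VL-2B-Instruct|AdaMLLM-food-Qwen-2B'
  'mllama|meta-llama/Llama-3.2-11B-Vision-Instruct|Llama-3.2-11B-Vision-Instruct'
  'mllama|AdaptLLM/food-Llama-3.2-11B-Vision-Instruct|AdaMLLM-food-Llama3.2-11B'
)

for run in "${RUNS[@]}"; do
  IFS='|' read -r MODEL_TYPE MODEL TAG <<< "$run"
  OUTPUT_DIR=./output/${TAG}_${DOMAIN}
  CUDA_VISIBLE_DEVICES='0,1,2,3,4,5,6,7' bash run_inference.sh \
    "$MODEL" "$DOMAIN" "$MODEL_TYPE" "$OUTPUT_DIR" "$RESULTS_DIR"
done
```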
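
Finally, the hunk's closing lines say metrics land in `./eval_results` and raw predictions in `./output`, but the layout inside those directories is not specified here. A quick sanity check after a run is simply to list what was written:

```bash
# The internal layout of these directories is not documented in this
# hunk, so enumerate the files rather than assume a format.
find ./eval_results ./output -type f | sort
```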