---
license: mit
---

## HalDet-LLaVA

HalDet-LLaVA is designed for multimodal hallucination detection. Trained on the MHaluBench training dataset, it achieves detection performance close to that of GPT4-Vision.

HalDet-LLaVA is trained on the [MHaluBench training set](https://huggingface.co/datasets/openkg/MHaluBench/blob/main/MHaluBench_train.json) using LLaVA-v1.5; the specific training parameters can be found in [finetune_task_lora.sh](https://github.com/zjunlp/EasyDetect/blob/main/HalDet-LLaVA/finetune_task_lora.sh).
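
If you just want to look at the training data, the split is a single JSON file in the dataset repo linked above. Below is a minimal sketch of fetching and loading it with `huggingface_hub`; inspect the file itself for the exact record schema.

```python
# Minimal sketch: download the MHaluBench training split and load it as JSON.
import json

from huggingface_hub import hf_hub_download  # pip install huggingface_hub

path = hf_hub_download(
    repo_id="openkg/MHaluBench",
    filename="MHaluBench_train.json",
    repo_type="dataset",
)
with open(path, "r", encoding="utf-8") as f:
    data = json.load(f)
print(len(data), "top-level entries")
```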

We trained HalDet-LLaVA on a single A800 GPU in one hour. If you don't have enough GPU resources, we will soon provide distributed training scripts.

You can run inference with HalDet-LLaVA using [inference.py](https://github.com/zjunlp/EasyDetect/blob/main/HalDet-LLaVA/inference.py).
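
For orientation, here is a sketch of what a LLaVA-v1.5-style generation call for claim checking typically looks like. It assumes the standard `llava` package from the LLaVA repository; the checkpoint path, image, and prompt below are hypothetical placeholders, and the actual prompt template lives in inference.py.

```python
# Illustrative LLaVA-v1.5-style inference; see inference.py in EasyDetect
# for the real script. Paths and the prompt are hypothetical placeholders.
import torch
from PIL import Image

from llava.constants import DEFAULT_IMAGE_TOKEN, IMAGE_TOKEN_INDEX
from llava.conversation import conv_templates
from llava.mm_utils import (
    get_model_name_from_path,
    process_images,
    tokenizer_image_token,
)
from llava.model.builder import load_pretrained_model

model_path = "path/to/HalDet-LLaVA"  # hypothetical checkpoint location
tokenizer, model, image_processor, _ = load_pretrained_model(
    model_path, None, get_model_name_from_path(model_path)
)

# Build a conversation that asks the model to verify a claim against an image.
conv = conv_templates["llava_v1"].copy()
conv.append_message(
    conv.roles[0],
    DEFAULT_IMAGE_TOKEN
    + "\nIs the following claim supported by the image? "
    + "Claim: a dog is running on the beach.",
)
conv.append_message(conv.roles[1], None)
prompt = conv.get_prompt()

# Preprocess the image and tokenize the prompt (with the image placeholder).
image = Image.open("example.jpg").convert("RGB")
image_tensor = process_images([image], image_processor, model.config).to(
    model.device, dtype=torch.float16
)
input_ids = (
    tokenizer_image_token(prompt, tokenizer, IMAGE_TOKEN_INDEX, return_tensors="pt")
    .unsqueeze(0)
    .to(model.device)
)

with torch.inference_mode():
    output_ids = model.generate(
        input_ids, images=image_tensor, do_sample=False, max_new_tokens=256
    )
print(tokenizer.decode(output_ids[0], skip_special_tokens=True).strip())
```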

To view more detailed information about HalDet-LLaVA and the training dataset, please refer to [EasyDetect](https://github.com/zjunlp/EasyDetect) and its [README](https://github.com/zjunlp/EasyDetect/blob/main/HalDet-LLaVA/README.md).