Image-Text-to-Text
Transformers
PyTorch
English
llava
text-generation
Inference Endpoints
SpursgoZmy committed
Commit
7101ea4
1 Parent(s): 1ff7879

Update README.md

Files changed (1)
  1. README.md +1 -1
README.md CHANGED
@@ -78,4 +78,4 @@ great performance on a wide range of table-based
  tasks, the resolution of input images (336*336) is relatively
  low and may limit the upper bound of its capacity. Luckily, with the emergence of MLLMs which
  possess higher input image resolution (e.g., Monkey (Li et al., 2023d), LLaVA-Next (Liu et al.,
- 2024)), we can use MMTab to develop more powerful tabular MLLM in the future research.
+ 2024)), researchers can use MMTab to develop more powerful tabular MLLM in the future research.