---
dataset_info:
features:
- name: id
dtype: string
- name: conversations
list:
- name: role
dtype: string
- name: content
dtype: string
splits:
- name: train
num_bytes: 277884785
num_examples: 160000
download_size: 126665150
dataset_size: 277884785
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
<h1 align="center"> Text-Based Reasoning About Vector Graphics </h1>
<p align="center">
<a href="https://mikewangwzhl.github.io/vdlm.github.io/">🌐 Homepage</a> |
<a href="">📃 Paper</a> |
<a href="https://huggingface.co/datasets/mikewang/PVD-160K">🤗 Data (PVD-160K)</a> |
<a href="https://huggingface.co/mikewang/PVD-160k-Mistral-7b">🤗 Model (PVD-160k-Mistral-7b)</a> |
<a href="https://github.com/MikeWangWZHL/VDLM">💻 Code</a>
</p>
We propose **VDLM**, a text-based visual reasoning framework for vector graphics. VDLM operates on text-based visual descriptions, namely SVG representations and learned Primal Visual Descriptions (PVD), enabling zero-shot reasoning with an off-the-shelf LLM. We show that VDLM outperforms state-of-the-art large multimodal models, such as GPT-4V, across various multimodal reasoning tasks involving vector graphics. See our [paper]() for more details.
![Overview](https://github.com/MikeWangWZHL/VDLM/blob/main/figures/overview.png?raw=true)
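Per the schema declared in the YAML header, each of the 160,000 training examples has a string `id` and a `conversations` list of `{role, content}` turns. The sketch below validates a record against that schema; the sample record's contents are illustrative only, not taken from the data, and the commented `load_dataset` call assumes the Hugging Face `datasets` library is installed:

```python
# A hypothetical sample record mirroring the declared features
# (id: string; conversations: list of {role: string, content: string}):
sample = {
    "id": "pvd_000001",
    "conversations": [
        {"role": "user", "content": "Describe the primitives in this SVG."},
        {"role": "assistant", "content": "A single circle centered at (10, 10)."},
    ],
}

def validate_example(ex: dict) -> bool:
    """Check that a record matches the dataset card's feature schema."""
    if not isinstance(ex.get("id"), str):
        return False
    convs = ex.get("conversations")
    if not isinstance(convs, list):
        return False
    return all(
        isinstance(turn, dict)
        and isinstance(turn.get("role"), str)
        and isinstance(turn.get("content"), str)
        for turn in convs
    )

print(validate_example(sample))  # True

# To stream the real split (requires the `datasets` library and network access):
#   from datasets import load_dataset
#   ds = load_dataset("mikewang/PVD-160K", split="train")
```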