---
inference: false
---
|
|
|
|
|
|
# MoMA Model Card
|
|
|
## Model details
|
|
|
**Model type:**

MoMA is an open-source image personalization model. It combines newly introduced attention layers with a multimodal large language model fine-tuned from LLaVA-7B.
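To make the architecture description above more concrete, the minimal sketch below shows one common way a subject-conditioned attention branch can sit alongside the usual text cross-attention in a diffusion UNet block. It is purely illustrative: the class name, layer shapes, and the way the subject branch is merged are assumptions for exposition, not MoMA's actual implementation (see the GitHub repository for the real code).

```python
import torch
import torch.nn as nn


class SubjectCrossAttention(nn.Module):
    """Conceptual sketch (not MoMA's code): an extra attention branch whose
    keys/values come from subject tokens produced by a multimodal LLM, added
    on top of the ordinary text cross-attention output."""

    def __init__(self, dim: int, ctx_dim: int):
        super().__init__()
        self.to_q = nn.Linear(dim, dim, bias=False)
        self.to_k_subj = nn.Linear(ctx_dim, dim, bias=False)
        self.to_v_subj = nn.Linear(ctx_dim, dim, bias=False)
        self.scale = dim ** -0.5

    def forward(
        self,
        hidden_states: torch.Tensor,   # (B, N, dim) UNet features
        text_attn_out: torch.Tensor,   # (B, N, dim) output of the text branch
        subject_tokens: torch.Tensor,  # (B, M, ctx_dim) MLLM subject features
    ) -> torch.Tensor:
        # Queries come from the UNet hidden states; keys/values come from
        # the subject tokens emitted by the multimodal LLM.
        q = self.to_q(hidden_states)
        k = self.to_k_subj(subject_tokens)
        v = self.to_v_subj(subject_tokens)
        attn = (q @ k.transpose(-2, -1)) * self.scale
        subj_out = attn.softmax(dim=-1) @ v
        # The subject branch is added to the text cross-attention output.
        return text_attn_out + subj_out
```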
|
|
|
**Paper or resources for more information:**

+ Project page: https://moma-adapter.github.io/

+ GitHub: https://github.com/bytedance/MoMA/tree/main

+ Paper: https://arxiv.org/abs/2404.05674
|
|
|
**Where to send questions or comments about the model:**

https://github.com/bytedance/MoMA/tree/main
|
|
|
## Intended use

**Primary intended uses:**

The primary use is research on personalized image generation tasks.
|
|
|
**Primary intended users:**

The primary intended users of the model are researchers and hobbyists in computer vision, natural language processing, machine learning, and artificial intelligence.
|
|