---
language:
- ja
tags:
- heron
- vision
- image-captioning
- VQA
pipeline_tag: image-to-text
license:
- cc-by-nc-4.0
inference: false
---

# Heron BLIP Japanese StableLM Base 7B llava-620k

## Model Details
Heron BLIP Japanese StableLM Base 7B is a vision-language model that can converse about input images.<br>
This model was trained using [the heron library](https://github.com/turingmotors/heron). Please refer to the code for details.

## Usage

Follow [the installation guide](https://github.com/turingmotors/heron/).

```python
import requests
import torch
from PIL import Image

from heron.models.video_blip import VideoBlipForConditionalGeneration, VideoBlipProcessor
from transformers import LlamaTokenizer

device_id = 0
device = f"cuda:{device_id}"

MODEL_NAME = "turing-motors/heron-chat-blip-ja-stablelm-base-7b-v1"

# load the model in half precision
model = VideoBlipForConditionalGeneration.from_pretrained(
    MODEL_NAME, torch_dtype=torch.float16, ignore_mismatched_sizes=True
)
model = model.half()
model.eval()
model.to(device)

# prepare a processor: the image processor comes from BLIP-2, while the
# tokenizer is replaced with the Japanese StableLM tokenizer
processor = VideoBlipProcessor.from_pretrained("Salesforce/blip2-opt-2.7b")
tokenizer = LlamaTokenizer.from_pretrained(
    "novelai/nerdstash-tokenizer-v1", additional_special_tokens=["▁▁"]
)
processor.tokenizer = tokenizer

# prepare inputs
url = "https://www.barnorama.com/wp-content/uploads/2016/12/03-Confusing-Pictures.jpg"
image = Image.open(requests.get(url, stream=True).raw)

# "What is interesting about this image?" in the ##human:/##gpt: chat format
text = "##human: この画像の面白い点は何ですか?\n##gpt: "

# do preprocessing
inputs = processor(
    text=text,
    images=image,
    return_tensors="pt",
    truncation=True,
)

inputs = {k: v.to(device) for k, v in inputs.items()}
inputs["pixel_values"] = inputs["pixel_values"].to(device, torch.float16)

# set eos tokens: stop at padding, end-of-sequence, or the "##" turn marker
eos_token_id_list = [
    processor.tokenizer.pad_token_id,
    processor.tokenizer.eos_token_id,
    int(tokenizer.convert_tokens_to_ids("##")),
]

# do inference (greedy decoding)
with torch.no_grad():
    out = model.generate(
        **inputs,
        max_length=256,
        do_sample=False,
        eos_token_id=eos_token_id_list,
        no_repeat_ngram_size=2,
    )

# print result
print(processor.tokenizer.batch_decode(out))
```
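
For repeated questions, the steps above can be wrapped into a small helper. This is a minimal sketch, not part of the original card: the `ask` function name and its defaults are our own, and it simply re-composes the calls shown above.

```python
def ask(image, question: str, max_length: int = 256) -> str:
    """Hypothetical convenience wrapper around the preprocessing and
    generation steps shown in the usage example above."""
    prompt = f"##human: {question}\n##gpt: "
    inputs = processor(text=prompt, images=image, return_tensors="pt", truncation=True)
    inputs = {k: v.to(device) for k, v in inputs.items()}
    inputs["pixel_values"] = inputs["pixel_values"].to(device, torch.float16)
    with torch.no_grad():
        out = model.generate(
            **inputs,
            max_length=max_length,
            do_sample=False,
            eos_token_id=eos_token_id_list,
            no_repeat_ngram_size=2,
        )
    # decode the full sequence; the answer follows the "##gpt: " marker
    return processor.tokenizer.batch_decode(out)[0]

# e.g. ask(image, "この画像には何が写っていますか?")  # "What is shown in this image?"
```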

## Model Details
* **Developed by**: [Turing Inc.](https://www.turing-motors.com/)
* **Adaptor type**: [BLIP-2](https://arxiv.org/abs/2301.12597)
* **Language Model**: [Japanese StableLM Base Alpha](https://huggingface.co/stabilityai/japanese-stablelm-base-alpha-7b)
* **Language(s)**: Japanese

### Training
This model was fully fine-tuned with LLaVA-Instruct-620K-JA.

### Training Dataset

- LLaVA-Instruct-620K-JA
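
The card does not document the dataset schema. As an illustration only, LLaVA-Instruct-style datasets typically store multi-turn conversations per image, so a record may look roughly like the following (field names and values are assumptions based on the original LLaVA-Instruct format, not taken from this card):

```python
# Hypothetical LLaVA-Instruct-style record (schema assumed, not from the card)
example_record = {
    "id": "000000123456",                        # sample identifier (assumed)
    "image": "coco/train2017/000000123456.jpg",  # path to the source image (assumed)
    "conversations": [
        # "What is shown in this image?" / "A dog is running with a frisbee in its mouth."
        {"from": "human", "value": "<image>\nこの画像には何が写っていますか?"},
        {"from": "gpt", "value": "犬がフリスビーをくわえて走っています。"},
    ],
}
```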

## Use and Limitations

### Intended Use

This model is intended for use in chat-like applications and for research purposes.

### Limitations

The model may produce inaccurate or false information, and its accuracy is not guaranteed. It is still in the research and development stage.

## How to cite

```bibtex
@misc{BlipJapaneseStableLM,
    url = {https://huggingface.co/turing-motors/heron-chat-blip-ja-stablelm-base-7b-v0},
    title = {Heron BLIP Japanese StableLM Base 7B},
    author = {Tanahashi, Kotaro and Inoue, Yuichi and Yamaguchi, Yu}
}
```

## Citations

```bibtex
@misc{JapaneseInstructBLIPAlpha,
    url = {https://huggingface.co/stabilityai/japanese-instructblip-alpha},
    title = {Japanese InstructBLIP Alpha},
    author = {Shing, Makoto and Akiba, Takuya}
}
```