---
library_name: pruna-engine
thumbnail: "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg"
metrics:
- memory_disk
- memory_inference
- inference_latency
- inference_throughput
- inference_CO2_emissions
- inference_energy_consumption
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
  <a href="https://www.pruna.ai/" target="_blank" rel="noopener noreferrer">
    <img src="https://i.imgur.com/eDAlcgk.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
  </a>
</div>
<!-- header end -->

[![Twitter](https://img.shields.io/twitter/follow/PrunaAI?style=social)](https://twitter.com/PrunaAI)
[![GitHub](https://img.shields.io/github/followers/PrunaAI?label=Follow%20%40PrunaAI&style=social)](https://github.com/PrunaAI)
[![LinkedIn](https://img.shields.io/badge/LinkedIn-Connect-blue)](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following)
[![Discord](https://img.shields.io/badge/Discord-Join%20Us-blue?style=social&logo=discord)](https://discord.com/invite/vb6SmA3hxu)
+
26
+ # Simply make AI models cheaper, smaller, faster, and greener!
27
+
28
+ - Give a thumbs up if you like this model!
29
+ - Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
30
+ - Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
31
+ - Read the documentations to know more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/)
32
+ - Join Pruna AI community on Discord [here](https://discord.gg/CP4VSgck) to share feedback/suggestions or get help.

**Frequently Asked Questions**
- ***How does the compression work?*** The model is compressed with bitsandbytes.
- ***How does the model quality change?*** The quality of the model output may slightly degrade.
- ***What is the model format?*** We use the standard safetensors format.
- ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
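
For intuition, bitsandbytes-style 8-bit quantization maps each float weight to an int8 value plus a shared scale factor. The sketch below is purely illustrative (a plain-Python absmax round trip, not the actual `pruna-engine` or bitsandbytes implementation):

```python
def absmax_quantize(weights):
    # Absmax quantization: choose a scale so the largest-magnitude
    # weight maps to 127, then round every weight to an int8 value.
    scale = max(abs(w) for w in weights) / 127.0
    return [round(w / scale) for w in weights], scale

def dequantize(quantized, scale):
    # Approximate reconstruction of the original float weights.
    return [q * scale for q in quantized]

weights = [0.1, -0.5, 0.25, 1.27]
q, scale = absmax_quantize(weights)   # q = [10, -50, 25, 127]
restored = dequantize(q, scale)       # close to the original weights
```

The rounding error in this round trip is why the output quality of a quantized model may slightly degrade.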

# Usage
```python
from transformers import LlavaNextProcessor, LlavaNextForConditionalGeneration
import torch
from PIL import Image
import requests

processor = LlavaNextProcessor.from_pretrained("llava-v1.6-vicuna-7b-hf")

model = LlavaNextForConditionalGeneration.from_pretrained("llava-v1.6-vicuna-7b-hf", torch_dtype=torch.float16, low_cpu_mem_usage=True)
model.to("cuda:0")

# prepare image and text prompt, using the appropriate prompt template
url = "https://github.com/haotian-liu/LLaVA/blob/1a91fc274d7c35a9b50b3cb29c4247ae5837ce39/images/llava_v1_5_radar.jpg?raw=true"
image = Image.open(requests.get(url, stream=True).raw)
prompt = "A chat between a curious human and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the human's questions. USER: <image>\nWhat is shown in this image? ASSISTANT:"

inputs = processor(prompt, image, return_tensors="pt").to("cuda:0")

# autoregressively complete prompt
output = model.generate(**inputs, max_new_tokens=100)

print(processor.decode(output[0], skip_special_tokens=True))
```

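The metrics listed in the card header (inference latency, throughput, memory, etc.) can be approximated with a simple timing harness. The helper below is an illustrative sketch, not part of `pruna-engine`:

```python
import time

def measure_latency(fn, warmup=2, runs=10):
    # Run fn a few times untimed to warm up caches, then report the
    # average wall-clock seconds per call over `runs` timed calls.
    for _ in range(warmup):
        fn()
    start = time.perf_counter()
    for _ in range(runs):
        fn()
    return (time.perf_counter() - start) / runs

# Example: wrap the generate call from the snippet above, e.g.
# latency = measure_latency(lambda: model.generate(**inputs, max_new_tokens=100), runs=5)
```

Note that on GPU you should call `torch.cuda.synchronize()` before reading the clock, since CUDA kernels launch asynchronously.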
## Credits & License

The license of the smashed model follows the license of the original model. Please check the license of the original model, liuhaotian/llava-v1.6-vicuna-7b, which provided the base model, before using this model. The license of `pruna-engine` is available [here](https://pypi.org/project/pruna-engine/) on PyPI.

## Want to compress other models?

- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).