Upload folder using huggingface_hub (#1)
- c7a68814bf49fc324cba26b35ad6b3704c2b6a43c7d6b8e76d79d745aa14d065 (851e46a7f433d5aa0f472b0e118411f1e85271bc)
- 4151a1ca76b22c8fd3fa602695587cb5382e17ccb1f884f6ffb22f4d9e631445 (68a105086a889700a333f030631c71ec10f52d3e)
- .gitattributes +1 -0
- README.md +59 -0
- config.json +1 -0
- model +3 -0
- plots.png +0 -0
.gitattributes
CHANGED
@@ -33,3 +33,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 *.zip filter=lfs diff=lfs merge=lfs -text
 *.zst filter=lfs diff=lfs merge=lfs -text
 *tfevents* filter=lfs diff=lfs merge=lfs -text
+model filter=lfs diff=lfs merge=lfs -text
README.md
ADDED
@@ -0,0 +1,59 @@
---
license: apache-2.0
library_name: pruna-engine
thumbnail: "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg"
metrics:
- memory_disk
- memory_inference
- inference_latency
- inference_throughput
- inference_CO2_emissions
- inference_energy_consumption
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
  <a href="https://www.pruna.ai/" target="_blank" rel="noopener noreferrer">
    <img src="https://i.imgur.com/eDAlcgk.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
  </a>
</div>
<!-- header end -->

# Simply make AI models cheaper, smaller, faster, and greener!

## Results

![image info](./plots.png)

## Setup
You can run the smashed model by:

1. Installing and importing the `pruna-engine` (version 0.2.9) package. Use `pip install pruna-engine==0.2.9 --extra-index-url https://pypi.nvidia.com --extra-index-url https://pypi.ngc.nvidia.com` to install it. See [PyPI](https://pypi.org/project/pruna-engine/) for details on the package.
2. Downloading the model files to `model_path`. This can be done from the Hugging Face Hub, using this repository's name, or by downloading them manually.
3. Loading the model.
4. Running the model.

You can achieve this by running the following code:

```python
from transformers.utils.hub import cached_file
from pruna_engine.PrunaModel import PrunaModel  # Step (1): install and import the `pruna-engine` package.

...
model_path = cached_file("PrunaAI/REPO", "model")  # Step (2): download the model files to `model_path`.
smashed_model = PrunaModel.load_model(model_path)  # Step (3): load the model.
y = smashed_model(x)  # Step (4): run the model.
```

## Configurations

The configuration info is in `config.json`.

## License

We follow the same license as the original model. Please check the license of the original model before using this model.

## Want to compress other models?

- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
config.json
ADDED
@@ -0,0 +1 @@
{"pruner": "None", "pruning_ratio": "None", "factorizer": "None", "quantizer": "None", "n_quantization_bits": 16, "output_deviation": 0.0, "compiler": "diffusers", "static_batch": true, "static_shape": false, "controlnet": "None", "unet_dim": 4, "device": "cuda", "max_batch_size": 1, "image_height": 768, "image_width": 768, "version": "1.5", "scheduler": "LCMScheduler", "num_inference_steps": 4, "tokenizer_name": "placeholder"}
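This configuration can be inspected programmatically. A minimal sketch (the JSON string below is copied verbatim from `config.json`; the variable names are ours):

```python
import json

# Compression configuration, copied verbatim from config.json in this repository.
config_text = (
    '{"pruner": "None", "pruning_ratio": "None", "factorizer": "None", '
    '"quantizer": "None", "n_quantization_bits": 16, "output_deviation": 0.0, '
    '"compiler": "diffusers", "static_batch": true, "static_shape": false, '
    '"controlnet": "None", "unet_dim": 4, "device": "cuda", "max_batch_size": 1, '
    '"image_height": 768, "image_width": 768, "version": "1.5", '
    '"scheduler": "LCMScheduler", "num_inference_steps": 4, '
    '"tokenizer_name": "placeholder"}'
)
config = json.loads(config_text)

# Per this config: compiled with diffusers, static batch of 1 at 768x768,
# LCMScheduler with 4 inference steps, no pruning or quantization applied.
print(config["compiler"], config["num_inference_steps"])
print(config["image_height"], config["image_width"])
```

Note that `static_batch` is true while `static_shape` is false, so batch size is fixed at 1 but image dimensions may vary at runtime.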
model
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:c5e2279249f927e18c0e73d52b808950456630aea49d4c23a1c4404a832a3c8b
size 7938814248
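The three lines above are a Git LFS pointer, not the weights themselves; the real `model` file is about 7.9 GB. After downloading it, integrity can be checked against the pointer's `oid`. A minimal sketch (the helper function is ours, not part of any Pruna or LFS API):

```python
import hashlib

# sha256 digest taken from the LFS pointer above.
EXPECTED_OID = "c5e2279249f927e18c0e73d52b808950456630aea49d4c23a1c4404a832a3c8b"

def sha256_of_file(path, chunk_size=1 << 20):
    """Hash a file in 1 MiB chunks so an ~8 GB model never sits fully in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Usage, after downloading the real file:
#   assert sha256_of_file("model") == EXPECTED_OID
```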
plots.png
ADDED