---
language: en
license: apache-2.0
---

# Shears Model Card: Shears-llama-13b-50-math-heuristic

A model fine-tuned from LLaMA-13B on math reasoning datasets using Shears.

## Model Details

### Information

- **Model name:** Shears-llama-13b-50-math-heuristic
- **Base model:** [LLaMA-13b](https://huggingface.co/yahma/llama-13b-hf)
- **Sparsity:** 50%
- **Domain:** Math
- **Subnetwork version:** Heuristic

### Adapter Configuration

- **LoRA rank:** 32 (24 in the heuristic subnetwork)
- **LoRA alpha:** 64
- **LoRA target modules:** q_proj, k_proj, v_proj, up_proj, down_proj
- **LoRA rank search space:** [32, 24, 16]

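For orientation, this is roughly how the hyperparameters above would map onto a standard `peft` `LoraConfig`. This is a sketch only: the rank search space belongs to Shears' neural low-rank adapter search and has no `LoraConfig` equivalent.

```python
from peft import LoraConfig

# Illustrative mapping of the adapter hyperparameters above; the rank
# search space ([32, 24, 16]) is a Shears concept, not a LoraConfig field.
config = LoraConfig(
    r=32,  # maximum rank; the extracted heuristic subnetwork uses 24
    lora_alpha=64,
    target_modules=["q_proj", "k_proj", "v_proj", "up_proj", "down_proj"],
    task_type="CAUSAL_LM",
)
```
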
### Training Hyperparameters

- **Batch size:** 16
- **Learning rate:** 3e-4
- **Epochs:** 3

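Expressed as `transformers` `TrainingArguments`, these settings would look roughly as follows. This is a sketch under the assumption that the batch size is a per-device batch size; the actual Shears training pipeline lives in the repository linked under "Model Sources" below, and the output directory is hypothetical.

```python
from transformers import TrainingArguments

# Sketch only: the listed hyperparameters in TrainingArguments form.
training_args = TrainingArguments(
    output_dir="./shears-llama-13b-50-math-heuristic",  # hypothetical path
    per_device_train_batch_size=16,  # assumes batch size 16 is per device
    learning_rate=3e-4,
    num_train_epochs=3,
)
```
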
### Training Data

Unified math reasoning dataset: [math_10k.json](https://github.com/AGI-Edgerunners/LLM-Adapters/blob/main/ft-training_set/math_10k.json) (collected from the training sets of GSM8K, MAWPS, and AQuA).

### Evaluation Data

[GSM8K](https://github.com/AGI-Edgerunners/LLM-Adapters/blob/main/dataset/gsm8k/test.json), [AQuA](https://github.com/AGI-Edgerunners/LLM-Adapters/blob/main/dataset/AQuA/test.json), [MAWPS](https://github.com/AGI-Edgerunners/LLM-Adapters/blob/main/dataset/mawps/test.json), [SVAMP](https://github.com/AGI-Edgerunners/LLM-Adapters/blob/main/dataset/SVAMP/test.json)

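The links above point to GitHub blob pages; a minimal sketch for fetching one of the test sets via its raw URL is shown below. The raw path and the `"instruction"` field name follow the LLM-Adapters repository layout and are assumptions, not part of this card.

```python
import json
import urllib.request

# Sketch: download the GSM8K test set and peek at the first example.
url = "https://raw.githubusercontent.com/AGI-Edgerunners/LLM-Adapters/main/dataset/gsm8k/test.json"
with urllib.request.urlopen(url) as f:
    test_set = json.load(f)
print(f"{len(test_set)} examples; first instruction: {test_set[0]['instruction']}")
```
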
## How to use

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

def generate_prompt(instruction):
    return f"""Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
{instruction}

### Response:
"""

# Load the sparsified base model, then attach the Shears adapter.
base_model_path = "shears-llama-13b-50-math-heuristic/base_model"
adapter_model_path = "shears-llama-13b-50-math-heuristic/adapter_model"
base_model = AutoModelForCausalLM.from_pretrained(base_model_path)
model = PeftModel.from_pretrained(base_model, adapter_model_path)
model.eval()

# Count non-zero parameters to verify the sparsity of the loaded model.
non_zero_params = sum((param.data != 0).sum().item() for _, param in model.named_parameters())
print(f"Number of all non-zero parameters: {non_zero_params}")

tokenizer = AutoTokenizer.from_pretrained(base_model_path)
tokenizer.pad_token_id = 0

instruction = "Edgar eats 18 pretzels a day. If his brother eats 1/2 as many, how many does his brother eat in a week?"
prompt = generate_prompt(instruction)
inputs = tokenizer(prompt, return_tensors="pt")
input_ids = inputs["input_ids"].to(model.device)
with torch.no_grad():
    generation_output = model.generate(
        input_ids=input_ids,
        return_dict_in_generate=True,
        output_scores=True,
        max_new_tokens=256,
        use_cache=True,
        num_beams=4,
    )
s = generation_output.sequences[0]
output = tokenizer.decode(s)
print(output)
```

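As an optional sanity check, not part of the snippet above, the printed non-zero count can be turned into a density estimate; with a 50%-sparse base model, the non-zero fraction should land near half of the total parameter count (the adapter weights shift it slightly):

```python
# Estimate overall weight density from the counts computed above.
total_params = sum(param.numel() for _, param in model.named_parameters())
print(f"Non-zero fraction: {non_zero_params / total_params:.2%}")
```
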
## Evaluation Results

Accuracy (%) on the four math reasoning test sets:

| Model | Sparsity | GSM8K | AQuA | MAWPS | SVAMP | Average |
|-----------------------|----------|-------|------|-------|-------|---------|
| LLaMA-7B-LoRA | - | 37.5 | 18.9 | 79.0 | 52.1 | 46.9 |
| [**LLaMA-7B-Shears**](https://huggingface.co/IntelLabs/shears-llama-7b-50-math-heuristic) | **50%** | 36.1 | 22.0 | 78.6 | 44.5 | 45.3 |
| LLaMA-13B-LoRA | - | 47.5 | 18.5 | 83.6 | 54.6 | 51.1 |
| [**LLaMA-13B-Shears**](https://huggingface.co/IntelLabs/shears-llama-13b-50-math-heuristic) | **50%** | 45.1 | 22.0 | 83.2 | 53.3 | 50.9 |

## Model Sources

- **Repository:** [https://github.com/IntelLabs/Hardware-Aware-Automated-Machine-Learning/tree/main/Shears](https://github.com/IntelLabs/Hardware-Aware-Automated-Machine-Learning/tree/main/Shears)
- **Paper:** [Shears: Unstructured Sparsity with Neural Low-rank Adapter Search]()

## Citation

```bibtex
@article{munoz2024shears,
  title   = {Shears: Unstructured Sparsity with Neural Low-rank Adapter Search},
  author  = {J. Pablo Munoz and Jinjie Yuan and Nilesh Jain},
  journal = {},
  year    = {2024}
}
```

## License

Apache-2.0